Set Up an Angular App Using NGINX as a Reverse Proxy and an SSL Certificate
In this article, I’ll guide you on:
- How to containerize an Angular app and run it using Docker Compose
- How to access it from the outside world by setting up NGINX as a reverse proxy
- Adding an extra layer of security by installing an SSL certificate for a safer connection
To create a Docker container for the Angular app, I’m using a private Git repository, but the process is the same for any Angular app.
If you have any suggestions and improvements, they are always appreciated.
Containerize the Angular app and run it using Docker Compose
To get started you will need an Ubuntu VPS; you can choose any provider you like. SSH into the server and run the following commands.
$ sudo apt update
It’s best practice to create a separate user with sudo privileges rather than doing everything as the root user.
$ sudo adduser calivert
This will ask you to set a password. Enter a password that you can remember.
After this, add the created user to the sudo group.
$ sudo usermod -aG sudo calivert
Switch to the new user:
$ su - calivert
Now install Docker and Docker Compose. See Docker and Docker-compose, or refer to the official site.
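If you just want a quick start on Ubuntu, one way is to install both from the default repositories (the linked guides cover the official repository-based install in more detail):
$ sudo apt install docker.io docker-compose
$ sudo systemctl enable --now docker
$ sudo usermod -aG docker calivert   # optional: run docker without sudo after re-login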
Verify the Docker and Docker Compose installations by running:
$ docker -v
$ docker-compose -v
Now that we have switched to the new user, create a new directory and cd into it.
$ mkdir frontend
$ cd frontend
Now, clone the repository into this new directory, or copy your Angular app code to it using an FTP client like FileZilla.
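For example, if you are cloning over HTTPS (the URL below is a placeholder; use your own repository, and note the trailing dot, which clones into the current directory):
$ git clone https://github.com/<your_username>/<your_repo>.git .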
Create a new file called Dockerfile in the project’s root folder.
# Stage 1: Use the official Node.js Alpine image as the base image
FROM node:21-alpine3.18 as build
# Set the working directory inside the container
WORKDIR /app
# Copy necessary files for dependency installation
COPY package.json package-lock.json angular.json ./
# Install the Angular CLI
RUN npm install -g @angular/cli
# Install Yarn package manager
RUN apk add yarn
# Install project dependencies using Yarn
RUN yarn install
# Copy the entire application to the container
COPY . .
# Build the Angular app with production configuration
RUN ng build --configuration=production
# Stage 2: Create a new image with a smaller base image (NGINX)
FROM nginx:1.25.3-alpine-slim
# Copy the NGINX configuration file to the appropriate location
COPY nginx.conf /etc/nginx/nginx.conf
# Copy the built Angular app from the 'build' stage to the NGINX HTML directory
COPY --from=build /app/dist/calipharma /usr/share/nginx/html
# Specify the command to run NGINX in the foreground
CMD ["nginx", "-g", "daemon off;"]
FROM: Uses an Alpine base image with Node.js version 21 and names this stage build. You can give the stage any name you like, but naming it is not mandatory.
WORKDIR: Sets the working directory to /app.
COPY: Copies the files needed for dependency installation before copying the rest of the app code.
Note: This is a crucial step to get the real advantage of Docker’s layer caching mechanism.
RUN: Installs the Angular CLI, and the next RUN installs the Yarn package manager.
Note: Here I’m building the app with the Yarn package manager; you can use npm too.
RUN: Installs the project dependencies using Yarn.
COPY: Copies the entire application into the container.
RUN: Builds the Angular app with the production configuration.
FROM: Starts stage two of the build with NGINX as the base image.
COPY: Copies the NGINX configuration file to the appropriate location.
COPY: Copies the built Angular app from the first stage to the NGINX HTML directory.
CMD: Configures NGINX to run in the foreground when the container starts, making it the main process that keeps the container running. This ensures the container doesn’t exit immediately after starting, which is essential for Docker containers to stay alive as long as the service inside them is running.
This Dockerfile implements a multi-stage build.
Why is a multi-stage build important?
- Small image size: The final image is much smaller than a normal build image because only the artifacts or binaries are copied from one stage to the next. This reduces storage and improves network transfer speed.
- More secure: We should keep only the required dependencies, because packages and dependencies can be potential sources of vulnerabilities for attackers. A multi-stage build is more secure because the final image includes only what it needs to run the application.
- Faster deployment: A reduced image size leads to faster image uploads and deployments, improving CI/CD pipeline speed. Smaller images also consume less storage space, resulting in quicker CI/CD builds, faster deployment times, and improved performance.
- Cost effective: The cost factor here is pretty negligible, but it should not be ignored. Smaller images contribute to cost savings in cloud environments, as you pay for storage and data transfer. Faster builds also mean reduced resource usage in cloud-based CI/CD platforms, potentially leading to lower costs.
To learn more about multi-stage Docker builds, click here
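After you build the image (a few steps below), you can see the size benefit for yourself by listing the images and comparing the final NGINX-based image with the Node.js base image:
$ sudo docker images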
Create a new file called nginx.conf in the project’s root folder.
A basic Nginx configuration file for our app looks as follows:
events {}

http {
    include /etc/nginx/mime.types;

    server {
        listen 80;
        server_name localhost;

        root /usr/share/nginx/html;
        index index.html;

        location / {
            try_files $uri $uri/ /index.html;
        }
    }
}
I don’t want to go into too much detail about what each line means here (if you’d like more explanation, there are two very good links at the end of this article). In general, here we define the server on which the application will be hosted, its port, and its default behavior.
Finally, create a docker-compose.yaml file to build and start our container. My docker-compose file looks like this:
version: '3'

services:
  calipharma:
    build: .
    image: calipharmafrontend:1
    container_name: calipharma
    ports:
      - "8000:80"
    restart: always
    networks:
      - calipharmanetwork

networks:
  calipharmanetwork:
    driver: bridge
This file builds and starts a Docker container from the calipharmafrontend image, names the container calipharma, binds container port 80 to host port 8000, and creates the network calipharmanetwork.
The Docker build command equivalent to this docker-compose.yaml is
$ sudo docker build -t calipharmafrontend:1 .
The docker run command equivalent to this docker-compose.yaml is
$ sudo docker run -d -p 8000:80 --name calipharma --network calipharmanetwork calipharmafrontend:1
If you’d like to learn Docker and Docker Compose, refer to the Docker and Docker Compose YouTube playlist.
Run the following docker-compose command to start the container
$ sudo docker-compose -f docker-compose.yaml up -d
Check the running status of the container
$ sudo docker ps -a
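If the container is not listed as Up, its logs are the first place to look (using the container name from the compose file above):
$ sudo docker logs calipharma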
To learn Docker Compose, click here
You need to allow inbound traffic on port 8000 so that you can access your container from outside.
$ sudo ufw allow 8000
$ sudo ufw enable
This will show a warning; type y and hit Enter.
Command may disrupt existing ssh connections. Proceed with operation (y|n)? y
Firewall is active and enabled on system startup
Now run the following command. Enabling ufw blocked the SSH port, and this will open it again.
$ sudo ufw allow openssh
Rule added
Rule added (v6)
$ sudo ufw status
Status: active
To                         Action      From
--                         ------      ----
8000                       ALLOW       Anywhere
8000/tcp                   ALLOW       Anywhere
OpenSSH                    ALLOW       Anywhere
8000 (v6)                  ALLOW       Anywhere (v6)
8000/tcp (v6)              ALLOW       Anywhere (v6)
OpenSSH (v6)               ALLOW       Anywhere (v6)
Now type your server IP address in a browser with port 8000
http://<your_server_IP>:8000
You will see your container is running
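If the page doesn’t load, a quick check from the server itself helps separate a container problem from a firewall problem; a 200 OK response here means the container is fine:
$ curl -I http://localhost:8000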
Configuring NGINX as a reverse proxy
The first step is to install Nginx. You can refer to this or just run the command below.
$ sudo apt install nginx
Create a configuration file for Nginx using the following command
$ sudo vim /etc/nginx/sites-available/pharmacy.calivert.com
Create the file with a name matching your domain name.
Paste the contents below into the file you just created. Don’t forget to replace <server_ip> with your server IP.
server {
    listen 80;
    server_name <server_ip>;

    location = /favicon.ico {
        access_log off;
        log_not_found off;
    }
}
Activate the configuration using the following command
$ sudo ln -s /etc/nginx/sites-available/pharmacy.calivert.com /etc/nginx/sites-enabled/
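Before restarting, it’s worth validating the configuration syntax so Nginx doesn’t fail to come back up:
$ sudo nginx -t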
Restart Nginx for the changes to take effect.
$ sudo systemctl restart nginx
Run the command below; it allows incoming connections to your Nginx web server. This is important for ensuring that external users can access your web server.
$ sudo ufw allow 'Nginx Full'
output
Rule added
Rule added (v6)
$ sudo ufw status
output
Status: active
To                         Action      From
--                         ------      ----
8000                       ALLOW       Anywhere
8000/tcp                   ALLOW       Anywhere
Nginx Full                 ALLOW       Anywhere
OpenSSH                    ALLOW       Anywhere
8000 (v6)                  ALLOW       Anywhere (v6)
8000/tcp (v6)              ALLOW       Anywhere (v6)
Nginx Full (v6)            ALLOW       Anywhere (v6)
OpenSSH (v6)               ALLOW       Anywhere (v6)
Check the installation by curling the server IP address or opening it in a browser.
$ curl http://<server_ip>
This will return status code 200, which means the site is running.
Or
Browse to http://<server_ip> and you should see the default Nginx welcome page.
Nginx is successfully installed. Let’s use it as a reverse proxy.
We know our container is running on the host’s port 8000; we need to bind this port to Nginx’s HTTP port. To do that, paste the following code into the Nginx configuration file we created earlier. Don’t forget to replace <server_IP> with your real IP.
server {
    listen 80;
    server_name <server_IP>;

    location = /favicon.ico {
        access_log off;
        log_not_found off;
    }

    location / {
        proxy_pass http://localhost:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Restart Nginx for the changes to take effect.
$ sudo systemctl restart nginx
We have successfully set up NGINX as a reverse proxy.
Check it by browsing to http://<server_ip>
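You can also check from the command line; a 200 response here means Nginx is forwarding requests to the container:
$ curl -I http://<server_ip>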
Installing SSL certificate
Why is an SSL certificate important? See this
Check this guide to install SSL certificates.
Follow each step as it is; just make sure to replace the domain name with your actual domain.
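For reference, with Certbot and its Nginx plugin the core commands usually look like this (assuming an Ubuntu server and the domain used in this article; replace it with your own):
$ sudo apt install certbot python3-certbot-nginx
$ sudo certbot --nginx -d pharmacy.calivert.com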
Certbot typically updates the configuration file for the specified domain or site to include SSL certificate-related settings. However, you should still manually check the file and add any required configuration.
Refer to the following configuration.
server {
    listen 80;
    server_name pharmacy.calivert.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name pharmacy.calivert.com;

    ssl_certificate /etc/letsencrypt/live/pharmacy.calivert.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/pharmacy.calivert.com/privkey.pem;

    location = /favicon.ico {
        access_log off;
        log_not_found off;
    }

    location / {
        proxy_pass http://localhost:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
What does the above configuration do?
Let’s go through the main aspects…
The first server block redirects all HTTP traffic to HTTPS.
In the second server block, the paths to ssl_certificate and ssl_certificate_key are specified, which are necessary for configuring SSL certificates in the web server. Everything apart from these remains the same. Save the file and run
$ sudo systemctl restart nginx
If you don’t get any errors after running the command, congratulations! You have successfully installed an SSL certificate for your domain.
Verify installation by browsing to your domain address.
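Let’s Encrypt certificates are valid for 90 days. Certbot normally sets up automatic renewal for you; you can confirm that renewal will work with a dry run:
$ sudo certbot renew --dry-run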
We’re done!
References
- How To Install and Use Docker on Ubuntu 20.04
- Install Docker Engine on Ubuntu
- How To Install and Use Docker Compose on Ubuntu 20.04
- How To Install Nginx on Ubuntu 20.04
- How To Secure Nginx with Let’s Encrypt on Ubuntu 20.04