If you’re like me and prefer to keep your system as clean as possible, you might avoid Homebrew unless absolutely necessary. TA-Lib is one of the few exceptions—until now. Fortunately, compiling TA-Lib from source is simple and doesn’t require Homebrew at all.
This step-by-step guide has been tested on macOS 11.1 Big Sur and runs flawlessly on Apple Silicon (M1) machines.
Step 1: Download the Source Code
Begin by downloading the official TA-Lib source package (ta-lib-0.4.0-src.tar.gz). Once the archive is downloaded, extract it and navigate into the directory:
tar xf ta-lib-0.4.0-src.tar.gz
cd ta-lib
Step 2: Compile and Install
Now compile and install the library using the following commands:
./configure --prefix=/usr/local
make
sudo make install
Step 3: Install the Python Wrapper
With the native library in place, you can now install the Python bindings:
pip install TA-Lib
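You can quickly confirm the bindings work with a small sanity check; the closing prices below are made-up sample data:
import numpy as np
import talib

close = np.random.random(100)  # made-up closing prices, just for the check
print(talib.SMA(close, timeperiod=10)[-5:])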
That’s it! You’ve installed TA-Lib without touching Homebrew.
How to containerize an Angular app and run it using Docker-compose
How to access it from the outside world by setting up NGINX as a reverse proxy
Adding an extra layer of security by installing an SSL certificate for a safer connection
To create a Docker container for the Angular app, I’m using a private Git repository, but the process will be the same for any Angular app.
If you have any suggestions and improvements, they are always appreciated.
Containerize the Angular app and run it using Docker-compose
To get started, you will need a VPS (Ubuntu); you can choose any provider of your choice. SSH into the server and run the following commands.
$ sudo apt update
It’s best practice to create a new user with sudo privileges rather than doing everything as the root user.
$ sudo adduser calivert
This will ask you to set a password. Enter a password that you can remember.
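The later steps assume the new user has sudo rights and that Docker and Docker Compose are installed. A rough sketch of those steps on Ubuntu (using the docker.io and docker-compose packages from Ubuntu’s own repositories) is:
$ sudo usermod -aG sudo calivert
$ su - calivert
$ sudo apt install -y docker.io docker-compose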
Verify the Docker and Docker Compose installation by running:
$ docker -v
$ docker-compose -v
After switching to the new user, create a new directory and cd into it.
$ mkdir frontend
$ cd frontend
Now, clone the repository or copy your Angular app code to this new directory using an FTP client like FileZilla.
Create a new file called Dockerfile that will be located in the project’s root folder.
# Stage 1: Use the official Node.js Alpine image as the base image
FROM node:21-alpine3.18 as build
# Set the working directory inside the container
WORKDIR /app
# Copy necessary files for dependency installation
COPY package.json package-lock.json angular.json ./
# Install the Angular CLI
RUN npm install -g @angular/cli
# Install Yarn package manager
RUN apk add yarn
# Install project dependencies using Yarn
RUN yarn install
# Copy the entire application to the container
COPY . .
# Build the Angular app with production configuration
RUN ng build --configuration=production
# Stage 2: Create a new image with a smaller base image (NGINX)
FROM nginx:1.25.3-alpine-slim
# Copy the NGINX configuration file to the appropriate location
COPY nginx.conf /etc/nginx/nginx.conf
# Copy the built Angular app from the 'build' stage to the NGINX HTML directory
COPY --from=build /app/dist/calipharma /usr/share/nginx/html
# Specify the command to run NGINX in the foreground
CMD ["nginx", "-g", "daemon off;"]
FROM Using an Alpine base image with Node.js version 21 and naming this stage build. You can give it any name of your choice, but naming it is not mandatory.
WORKDIR Setting the working directory to /app
COPY Copy all necessary files before copying the app code
Note: This is a crucial step to gain the real advantage of Docker’s layer caching mechanism.
RUN Installing the Angular CLI
RUN Installing the Yarn package manager
Note: Here I’m building the app with the Yarn package manager; you can use npm too.
RUN Install project dependencies using Yarn
COPY Copy the entire application to the container
RUN Build the Angular app with production configuration
FROM Stage two of the build process with NGINX as the base image
COPY Copy the NGINX configuration file to the appropriate location
COPY Copy the built Angular app from the first build image to the NGINX HTML directory
CMD instruction is configuring NGINX to run in the foreground when the container starts, allowing it to be the main process that keeps the container running. It ensures that the container doesn’t exit immediately after starting, which is essential for Docker containers to stay alive as long as the service inside them is running.
This Dockerfile implements a multi-stage build.
Why is a multi-stage build important?
· Small image size: The final image size is much smaller than a normal build image because it only copies artifacts or binaries from one stage to another. This results in reducing storage and improving network transfer speed.
· More secure: We should keep only the required dependencies because packages and dependencies can be potential sources of vulnerability for attackers. A multistage build is more secure because the final image includes only what it needs to run the application.
· Faster deployment: A reduced image size leads to faster image uploads and deployments, improving CI/CD pipeline speed. Smaller images also consume less storage space, resulting in quicker CI/CD builds, faster deployment times, and improved performance.
· Cost effective: the cost factor here is pretty negligible, but should not be ignored. Smaller images contribute to cost savings in cloud environments as you pay for storage and data transfer. Faster builds also mean reduced resource usage in cloud-based CI/CD platforms, potentially leading to lower costs.
To learn more about multi-stage docker builds click here
Create a new file called nginx.conf that will be located in the project’s root folder.
By default, the NGINX configuration file looks as follows:
events {}

http {
    include /etc/nginx/mime.types;

    server {
        listen 80;
        server_name localhost;

        root /usr/share/nginx/html;
        index index.html;

        location / {
            try_files $uri $uri/ /index.html;
        }
    }
}
I don’t want to go into too much detail about what each line means here (if you would like there are two very nice links with more explanation at the end of this article). In general, here we define the server on which the application will be hosted, its port, and default behavior.
Finally, create a docker-compose.yaml file to build and start our container.
This file will build and start a Docker container with the image calipharmafrontend and container name calipharma; container port 80 will be bound to host port 8000, and the network calipharmanetwork will be created.
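A minimal docker-compose.yaml matching that description (the compose file version and the service name frontend are assumptions) could look like this:
version: "3"
services:
  frontend:
    build: .
    image: calipharmafrontend:1
    container_name: calipharma
    ports:
      - "8000:80"
    networks:
      - calipharmanetwork
networks:
  calipharmanetwork:
Build and start the container with:
$ sudo docker-compose up -d --build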
The Docker build command equivalent to this docker-compose.yaml is
$ sudo docker build -t calipharmafrontend:1 .
The Docker run command equivalent to this docker-compose.yaml is:
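Based on the same description, a roughly equivalent run command (assuming the calipharmanetwork network already exists, for example created with docker network create calipharmanetwork) would be:
$ sudo docker run -d --name calipharma --network calipharmanetwork -p 8000:80 calipharmafrontend:1
The next steps assume NGINX is installed on the host; on Ubuntu that is typically done with:
$ sudo apt install nginx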
Restart nginx and allow the changes to take place.
$ sudo systemctl restart nginx
Run the command below; this will allow incoming connections to your NGINX web server, which is important for ensuring that external users can access it:
$ sudo ufw allow 'Nginx Full'
output
Rule added
Rule added (v6)
$ sudo ufw status
output
Status: active
To                         Action      From
--                         ------      ----
8000                       ALLOW       Anywhere
8000/tcp                   ALLOW       Anywhere
Nginx Full                 ALLOW       Anywhere
OpenSSH                    ALLOW       Anywhere
8000 (v6)                  ALLOW       Anywhere (v6)
8000/tcp (v6)              ALLOW       Anywhere (v6)
Nginx Full (v6)            ALLOW       Anywhere (v6)
OpenSSH (v6)               ALLOW       Anywhere (v6)
Check the installation with curl or by opening the server’s IP address in a browser.
$ curl http://<server_ip>
If this returns status code 200, the site is running.
Or
Browsing to http://<server_ip> in a browser should show the default NGINX welcome page.
NGINX is successfully installed. Let’s use it as a reverse proxy.
We know our container is running on the host’s port 8000; we need to bind this port to NGINX’s HTTP port. For that, refer to the following code and paste it into the server’s NGINX configuration file. Don’t forget to replace <server_IP> with your real IP.
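A minimal sketch of such a server block (placed inside the http block, or in a site file under /etc/nginx/sites-available/) could look like this, assuming the container is published on host port 8000:
server {
    listen 80;
    server_name <server_IP>;

    location / {
        proxy_pass http://localhost:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}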
Next, let’s secure the site with an SSL certificate from Let’s Encrypt using Certbot. Follow each step as it is; just make sure to replace the domain name with your actual domain.
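On Ubuntu with the NGINX plugin, the Certbot steps typically look like this (replace example.com with your actual domain):
$ sudo apt install certbot python3-certbot-nginx
$ sudo certbot --nginx -d example.com -d www.example.com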
Certbot typically updates the configuration file for the specified domain or site to include SSL certificate-related configurations. However, you must still manually check and add the required configurations to the configuration file.
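Such a configuration usually ends up looking something like the sketch below; the certificate paths follow Certbot’s default layout for a domain named example.com, so adjust them to your own domain:
server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name example.com www.example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://localhost:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}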
The first server block redirects all HTTP traffic to HTTPS.
In the second server block, paths to ssl_certificate and ssl_certificate_key are specified, which are necessary for configuring SSL certificates in the web server. Everything apart from these remains the same. Save the file and run:
$ sudo systemctl restart nginx
If you don’t get any errors after running the command, congratulations! You have successfully installed an SSL certificate for your domain.
Verify installation by browsing to your domain address.
In the ever-evolving world of software development and system administration, virtual environments play a crucial role. They enable isolation, reproducibility, and efficient management of dependencies. Whether you’re a developer, system administrator, or simply curious, this guide will walk you through the installation, usage, and advantages of virtual environments on Windows, macOS, and Linux.
Section 1: Installation of Virtual Environments In this section, we’ll cover how to install virtual environment tools on each of the three major operating systems.
Windows: To create virtual environments on Windows, we recommend using the virtualenv package (Python’s built-in alternative is venv). Here’s how to install it:
pip install virtualenv
macOS: On macOS, you can use virtualenv as well. Install it using pip:
pip install virtualenv
Linux: Linux users can use either virtualenv or Python’s built-in venv module. To install virtualenv:
pip install virtualenv
Section 2: Creating and Using Virtual Environments Now that you have the tools installed, let’s create and use virtual environments.
Creating a Virtual Environment:
Open your terminal.
Navigate to the project directory where you want to create the virtual environment.
Run the following command:
virtualenv venv # Replace 'venv' with your preferred environment name
Activating the Virtual Environment:
Windows:
venv\Scripts\activate
macOS and Linux:
source venv/bin/activate
Installing Packages: Within the virtual environment, use pip to install packages. For example:
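pip install requests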
Section 3: Advantages of Virtual Environments Virtual environments offer several advantages:
Isolation: They keep project dependencies separate, preventing conflicts.
Reproducibility: Ensure consistent development environments across different systems.
Dependency Management: Easily switch between different versions of packages.
Resource Efficiency: They consume fewer resources compared to global installations.
Section 4: When to Use Virtual Environments Virtual environments are beneficial in scenarios such as:
Web Development: Isolate dependencies for different projects.
Data Science: Maintain distinct environments for various data analysis tasks.
System Administration: Safely test software installations.
Section 5: Additional Tips and Tricks Here are some tips for managing virtual environments effectively:
Use requirements.txt to list project dependencies.
Consider using tools like conda for more complex environments.
Automate virtual environment creation with scripts (a small sketch follows below).
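For example, a small helper script (the name setup_env.sh is just an illustration) can create an environment and install the project’s dependencies in one go:
#!/bin/bash
# setup_env.sh: create a virtual environment and install dependencies
virtualenv venv
source venv/bin/activate
pip install -r requirements.txt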
A requirements.txt file is a text file commonly used in Python projects to list all the external libraries and their versions that are required for the project to run. Here’s an example of how a requirements.txt file might look:
# This is a comment, and it will be ignored by pip
requests==2.26.0
numpy>=1.21.1
pandas==1.3.3
flask
In the example above:
Each line in the file represents a package or library that your project depends on.
The == operator is used to specify a specific version of the package.
The >= operator is used to specify a minimum version of the package.
If you don’t specify a version, it means you’re okay with any version, and the latest version will be installed.
You can add comments in the file by starting a line with #, as shown in the first line.
When you have a requirements.txt file like this, you can use the pip command to install all the dependencies listed in the file. For example:
pip install -r requirements.txt
This command will read the requirements.txt file and install the specified packages and their versions, ensuring that your project uses the correct dependencies.
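To generate a requirements.txt file from an existing environment, you can use pip freeze:
pip freeze > requirements.txt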
Conclusion: Virtual environments are invaluable tools in modern computing. They empower developers and administrators to work efficiently and maintain clean, reproducible environments. By following the steps outlined in this guide, you can harness the power of virtual environments on Windows, macOS, and Linux to streamline your work and enhance your productivity.
Python Flask is a popular framework for building web applications and APIs in Python. Its appeal lies in giving developers a quick and straightforward way to create RESTful APIs that connect with other software. Flask is lightweight and requires minimal configuration, making it a great fit for small to medium-sized APIs and a strong option for programmers who want robust, scalable APIs in the Python ecosystem. This post will explain how to create a simple REST API with Flask. (more…)