17.04.2025

How to deploy n8n with AI via npm?

AI has greatly simplified many monotonous tasks that used to require uniquely human skills: summarization, reasoning, generalization. For a long time, however, integrating it into a business remained an expensive undertaking. Companies embedded individual AI features into their products, but these did not free workflows from relying on human effort.

Not long ago, n8n was released: a platform for automating various business processes with a free Community License. Its low-code approach lowers the entry threshold, so only a minimal understanding of how the system works is required, and that is exactly what we will cover in this material!

What is n8n and how do I install it?

n8n is an application server that ships with a library of nodes, or blocks, each of which performs its own function: interacting with an external service or processing data internally. The nodes normalize their data to a single format and pass it along the chain you build. A typical chain looks like this:
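To make "normalizing data to a single format" concrete: n8n nodes exchange data as an array of items, each carrying its payload under a json key. A minimal sketch with an invented sample payload (the customer/message fields are illustrative, not part of any real workflow):

```shell
# n8n nodes pass data along the chain as an array of "items",
# each wrapping its payload under the "json" key (sample data is illustrative)
items='[{"json":{"customer":"Alice","message":"Hi"}},{"json":{"customer":"Bob","message":"Hello"}}]'

# Any downstream node sees the same uniform structure it can iterate over:
echo "$items" | python3 -c '
import sys, json
for item in json.load(sys.stdin):
    print(item["json"]["customer"])
'
```

This uniform shape is what lets arbitrary nodes be chained together: a Telegram trigger and a Google Docs writer both consume and produce the same item structure.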

Screenshot № 1 — Intro

Installation does not take much time: it is enough to pull the image via Docker or install the project via the npm package manager. We have already covered the container-based method in recent materials, so here we will solve the problem the second way.

If you do not have your own hardware or the static public (white) IP address required for operation, you can use a VPS from Serverspace.

First, let's install the dependencies our environment needs. To do this, download nvm, the cross-platform Node.js version manager:

curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.2/install.sh | bash

Then load nvm into the current shell and install Node.js 20:

\. "$HOME/.nvm/nvm.sh"

nvm install 20

Screenshot № 2 — Installation

In the same way, we will install the project itself, only through the npm package manager, and add environment variables for the platform configuration:

npm install n8n -g

echo "export N8N_HOST=0.0.0.0
export N8N_PORT=3333
export N8N_PROTOCOL=http
export N8N_PATH=http://0.0.0.0:3333/
export N8N_BASIC_AUTH_ACTIVE=true
export N8N_BASIC_AUTH_USER=admin
export N8N_BASIC_AUTH_PASSWORD=supersecurepassword
export TZ=Europe/Moscow
export N8N_LOG_LEVEL=info
export N8N_LOG_OUTPUT=console
export N8N_SECURE_COOKIE=false" | tee -a ~/.bashrc
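Note that variables appended to ~/.bashrc only take effect in new login shells, so run `source ~/.bashrc` (or log in again) before starting n8n. The same idea can also be sketched with a standalone file, which keeps the configuration easy to reload; the file name n8n.env is our choice, and only two variables are shown for brevity:

```shell
# Keep the n8n settings in a dedicated file instead of ~/.bashrc
# (two variables shown for brevity; add the rest the same way)
cat > ./n8n.env <<'EOF'
export N8N_HOST=0.0.0.0
export N8N_PORT=3333
EOF

# Load the settings into the current shell session and confirm they took effect
. ./n8n.env
echo "$N8N_PORT"
```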

Screenshot № 3 — Configuration

Replace the placeholder values with your own login and password, and adjust the network settings as needed. If your node has a domain name, use it in N8N_PATH instead of the IP address. After that, run:

n8n
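Run in the foreground like this, n8n stops as soon as the SSH session closes. Below is a minimal sketch of a systemd unit that keeps the service running and restarts it on failure; the unit name, the root user, and launching through `bash -lc` (so the shell profile loads nvm's node and the N8N_* variables defined earlier) are our assumptions, so adjust them to your setup:

```shell
# Create a systemd unit; 'bash -lc' runs a login shell, which loads the
# profile where nvm and the N8N_* variables were configured above
sudo tee /etc/systemd/system/n8n.service > /dev/null <<'EOF'
[Unit]
Description=n8n workflow automation
After=network.target

[Service]
Type=simple
User=root
ExecStart=/bin/bash -lc 'n8n'
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

# Enable the unit at boot and start it immediately
sudo systemctl daemon-reload
sudo systemctl enable --now n8n
```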

Screenshot № 4 — Launch service

After a successful launch, open the previously specified URL, http://x.x.x.x:3333/. Note that if you want to set up an SSL connection, remove the N8N_SECURE_COOKIE=false environment variable.
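If you do go the SSL route, n8n can serve HTTPS directly. A sketch of the relevant variables (the certificate paths and domain are placeholders; terminating TLS on a reverse proxy such as nginx is an equally valid option):

```shell
# Serve HTTPS from n8n itself; replace the paths and domain with your own
export N8N_PROTOCOL=https
export N8N_SSL_KEY=/etc/ssl/private/n8n.key   # placeholder path to your private key
export N8N_SSL_CERT=/etc/ssl/certs/n8n.crt    # placeholder path to your certificate
export N8N_PATH=https://your-domain.example/
```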

How to work with n8n?

A chain always starts with a trigger, which can be a customer message in Telegram or a new document in Google Docs, and ends with an endpoint that receives the analytics or transformed data produced by n8n. Between these two points, practically any sequence of nodes can be built.

As an example, consider a simple workflow in which the n8n chat is connected via API to an AI model provided by Serverspace. It can analyze any of your data streams and provide analytics, much like the GPT-4 model. The workflow itself looks like this:

Screenshot № 5 — PHI-4

The trigger is a chat in which we can talk to the model. When the Microsoft Phi-4 node receives a message, it calls the external AI service and, having received the result, exposes it as variables that can then be written anywhere, including Google Docs. You can download the workflow for running Microsoft Phi-4 on Serverspace infrastructure from our file and import it into the panel by clicking the three dots on the right. Don't forget to add a key!

To do so, go to our dashboard, create an API key, and copy it:

Screenshot № 6 — API key

Open the Phi-4 node and paste the key, in its entirety, into the Value field of the Authorization header. Then open the chat and send any request, just as with a classic GPT bot:

Screenshot № 7 — Node

You can modify this workflow as you like and embed the Serverspace AI block into any process. Note in the screenshot above that we use the chatInput variable to build the API request; the JSON response is then passed to the Output, where we filter out the "content" string before sending it back to the chat.
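The filtering step described above can be sketched outside n8n as well. Assuming the endpoint returns a chat-completion-style JSON (the sample payload below is invented for illustration, not a real Serverspace response), extracting the "content" string looks like this:

```shell
# A chat-completion-style response, as an illustrative sample
response='{"choices":[{"message":{"role":"assistant","content":"Hello from Phi-4"}}]}'

# Pull out only the "content" string, as the Output node's filter does
echo "$response" | python3 -c '
import sys, json
print(json.load(sys.stdin)["choices"][0]["message"]["content"])
'
```

In n8n itself, the same extraction is just an expression referencing the node's output variables; the sketch only shows which field the filter selects.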