Commit 255e7ec

configure dev env part
1 parent 2ff288b commit 255e7ec

File tree

1 file changed: +44 -1 lines changed


tutorials/how-to-implement-rag/index.mdx

@@ -2,6 +2,11 @@
meta:
  title: How to implement RAG with managed inference
  description:
content:
  h1: How to implement RAG with managed inference
tags: inference managed postgresql pgvector object storage
categories:
  - inference
---

RAG (Retrieval-Augmented Generation) is a powerful approach for enhancing a model's knowledge by leveraging your own dataset.
@@ -14,4 +19,42 @@ By utilizing our managed inference services, managed databases, and object stora
- [Inference Deployment](/ai-data/managed-inference/how-to/create-deployment/): Set up an inference deployment using [sentence-transformers/sentence-t5-xxl](/ai-data/managed-inference/reference-content/sentence-t5-xxl/) on an L4 instance to efficiently process embeddings.
- [Inference Deployment](/ai-data/managed-inference/how-to/create-deployment/) with the model of your choice.
- [Object Storage Bucket](/storage/object/how-to/create-a-bucket/) to store all the data you want to inject into your LLM model.
- [Managed Database](/managed-databases/postgresql-and-mysql/how-to/create-a-database/) to securely store all your embeddings.

## Configure your development environment

1. Install the required packages by running the following command:

    ```sh
    pip install langchain psycopg2 python-dotenv scaleway
    ```

2. Configure your environment variables: create a `.env` file and add the variables below. They store your API keys, database connection details, and other configuration values. A sketch showing how to load them in Python follows this list.
    ```sh
    # .env file

    # Scaleway API credentials
    SCW_ACCESS_KEY=your_scaleway_access_key
    SCW_SECRET_KEY=your_scaleway_secret_key
    SCW_API_KEY=your_scaleway_api_key

    # Scaleway project and region
    SCW_DEFAULT_PROJECT_ID=your_scaleway_project_id
    SCW_DEFAULT_REGION=your_scaleway_region

    # Scaleway managed database (PostgreSQL) credentials
    SCW_DB_NAME=your_scaleway_managed_db_name
    SCW_DB_USER=your_scaleway_managed_db_username
    SCW_DB_PASSWORD=your_scaleway_managed_db_password
    SCW_DB_HOST=your_scaleway_managed_db_host # The IP address of your database instance
    SCW_DB_PORT=your_scaleway_managed_db_port # The port number for your database instance

    # Scaleway S3 bucket configuration
    SCW_BUCKET_NAME=your_scaleway_bucket_name
    SCW_BUCKET_ENDPOINT=your_scaleway_bucket_endpoint # S3 endpoint, e.g., https://s3.fr-par.scw.cloud

    # Scaleway Inference API configuration (Embeddings)
    SCW_INFERENCE_EMBEDDINGS_ENDPOINT=your_scaleway_inference_embeddings_endpoint # Endpoint for the sentence-transformers/sentence-t5-xxl deployment
    SCW_INFERENCE_API_KEY_EMBEDDINGS=your_scaleway_api_key_for_embeddings

    # Scaleway Inference API configuration (LLM deployment)
    SCW_INFERENCE_DEPLOYMENT_ENDPOINT=your_scaleway_inference_endpoint # Endpoint for your LLM deployment
    SCW_INFERENCE_API_KEY=your_scaleway_api_key_for_inference_deployment
    ```
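
To confirm the configuration, you can load the `.env` file with `python-dotenv` and open a test connection to your Managed Database. The snippet below is a minimal sketch, not part of the tutorial itself: the file name `check_env.py` and the exact list of checked variables are illustrative, and it only uses the packages installed in step 1 together with the variable names defined above.

```python
# check_env.py - illustrative sketch: load the .env file defined above and
# verify the Managed Database is reachable. Assumes the .env file sits in
# the current working directory.
import os

import psycopg2
from dotenv import load_dotenv

# Read the .env file and export its values into the process environment
load_dotenv()

# Fail early if a required variable is missing (list is illustrative)
required = [
    "SCW_ACCESS_KEY",
    "SCW_SECRET_KEY",
    "SCW_DB_NAME",
    "SCW_DB_USER",
    "SCW_DB_PASSWORD",
    "SCW_DB_HOST",
    "SCW_DB_PORT",
]
missing = [name for name in required if not os.getenv(name)]
if missing:
    raise SystemExit(f"Missing environment variables: {', '.join(missing)}")

# Open and close a test connection to the Managed Database instance
conn = psycopg2.connect(
    database=os.getenv("SCW_DB_NAME"),
    user=os.getenv("SCW_DB_USER"),
    password=os.getenv("SCW_DB_PASSWORD"),
    host=os.getenv("SCW_DB_HOST"),
    port=os.getenv("SCW_DB_PORT"),
)
conn.close()
print("Environment loaded and database reachable.")
```

Running `python check_env.py` should print the confirmation message; any missing variable or unreachable database surfaces immediately, before you start building the RAG pipeline.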
