
# AI-Code-Generator/server-v1

## Running the server with Docker

```shell
docker run -d -p 8000:8000 \
  -e GEMINI_API_KEY=gemini_api_key_here \
  -e PINECONE_API_KEY=pinecone_api_key_here \
  -e PINECONE_ENVIRONMENT=us-east-1-aws \
  -e PINECONE_INDEX_NAME=conversation-history \
  <IMAGE-ID>
```
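To keep API keys off the command line (and out of shell history), the same variables can be supplied through Docker's `--env-file` flag. This is a sketch; the `.env` filename is an assumption, not something the repository specifies:

```shell
# .env — one KEY=value pair per line (do not commit this file):
#   GEMINI_API_KEY=gemini_api_key_here
#   PINECONE_API_KEY=pinecone_api_key_here
#   PINECONE_ENVIRONMENT=us-east-1-aws
#   PINECONE_INDEX_NAME=conversation-history

docker run -d -p 8000:8000 --env-file .env <IMAGE-ID>
```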

## Querying the `/ask-ai` endpoint

On Linux/macOS shells, single-quote the JSON body (inner double quotes need no escaping inside single quotes):

```shell
curl -X POST "http://<IP>:8000/ask-ai" -H "Content-Type: application/json" -d '{"query": "<QUESTION>"}'
```

On Windows CMD, double-quote the body and backslash-escape the inner quotes:

```shell
curl -X POST "http://<IP>:8000/ask-ai" -H "Content-Type: application/json" -d "{\"query\": \"<QUESTION>\"}"
```

To attach a user ID and prior conversation context to the request:

```shell
curl -X POST "http://<IP>:8000/ask-ai" -H "Content-Type: application/json" -d "{\"query\": \"<QUESTION>\", \"user_ID\": \"<USER_ID>\", \"context\": [\"<CONTEXT>\"]}"
```
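The same requests can be made from Python, which sidesteps shell-quoting issues entirely by serializing the body with `json.dumps`. A minimal sketch using only the standard library; the field names (`query`, `user_ID`, `context`) come from the curl examples above, while the host/port and the shape of the response JSON are assumptions:

```python
import json
from urllib import request

# Assumed base URL, matching the -p 8000:8000 mapping in the docker run command.
API_URL = "http://localhost:8000/ask-ai"


def build_ask_ai_body(query, user_id=None, context=None):
    """Serialize the request body; field names follow the curl examples."""
    body = {"query": query}
    if user_id is not None:
        body["user_ID"] = user_id
    if context is not None:
        body["context"] = context  # list of prior-context strings
    return json.dumps(body)


def ask_ai(query, user_id=None, context=None):
    """POST the query to /ask-ai and return the decoded JSON response."""
    req = request.Request(
        API_URL,
        data=build_ask_ai_body(query, user_id, context).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Example: `ask_ai("Write a bubble sort in Python", user_id="u123", context=["previous turn"])`.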

## About

This repository exposes a Python-based HTTP endpoint (`/ask-ai`) that returns responses from the Gemini LLM. Judging by the environment variables above, conversation history is stored in a Pinecone index.
