
๐Ÿ•ต๏ธ๐Ÿ”— BingChain

This is an evolution of langchain-mini, a very simple re-implementation of LangChain, in ~350 lines of core code. In essence, it is a multi-model LLM-powered chat application that is able to use tools (Microsoft Bing search, URL retrieval, API plugin installation, API calls, a Javascript sandbox, JsFiddle creation, image and video preview, and a scientific calculator, as well as meta-tools such as list, disable, reset and debug) in order to build a chain of thought to hold conversations and answer questions.

Here's an example:

Q: What is the world record for solving a rubiks cube?
The world record for solving a Rubik's Cube is 4.69 seconds, held by Yiheng Wang (China).
Q: Can a robot solve it faster?
The fastest time a robot has solved a Rubik's Cube is 0.637 seconds.
Q: Who made this robot?
Infineon created the robot that solved a Rubik's Cube in 0.637 seconds.
Q: What time would an average human expect for solving?
It takes the average person about three hours to solve a Rubik's cube for the first time.

This is not intended to be a replacement for LangChain, which has many alternative and composable building blocks; instead, it was built to demonstrate the power of assembling a set of tools (such as API calling and Javascript execution). If you're interested in how LangChain and similar tools work, this is a very good starting point.

Running / developing

Install the dependencies (Node.js >= v18 is required):

% npm install

To display videos in the terminal, you will need to install ffmpeg.

You'll need to have an OpenAI API key, and optionally a Bing Search API key. These can be supplied to the application via a .env file:

OPENAI_API_KEY="..."
BING_API_KEY="..."
MODEL=gpt-4
TOKEN_LIMIT=32768
TEMPERATURE=0.25
RESPONSE_LIMIT=512
PORT=1337
GUI=1
#LANG=Ukrainian
#DEBUG=2
#SEED_QUERIES=1
#PROMPT_OVERRIDE=Riddle me this! ${question}

You can also set PROVIDER=anthropic (with a relevant ANTHROPIC_API_KEY, MODEL and TOKEN_LIMIT) to use an alternative LLM/API provider.
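For example, a .env fragment for the Anthropic provider might look like the following (the model name and token limit here are illustrative only; check the provider's current documentation for valid values):

```
PROVIDER=anthropic
ANTHROPIC_API_KEY="..."
MODEL=claude-2
TOKEN_LIMIT=100000
```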

Set TOKEN_LIMIT to the advertised context size of the model you are using: 32768 for gpt-4, 4096 for text-davinci-003, and 2048 for text-curie-001.

The clever part is the default initial prompt, which is held in prompt.txt, unless overridden by the PROMPT_OVERRIDE environment variable.
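As a rough illustration of how a ${question} placeholder in an override prompt can be filled in, here is a hypothetical sketch; this helper is not the application's actual code:

```javascript
// Hypothetical sketch: substitute ${name} placeholders in a prompt template.
// The real application reads its template from prompt.txt or PROMPT_OVERRIDE.
const fill = (template, vars) =>
  template.replace(/\$\{(\w+)\}/g, (_, key) => vars[key] ?? "");

const prompt = fill("Riddle me this! ${question}", {
  question: "What is the world record for solving a Rubik's cube?",
});
console.log(prompt);
// "Riddle me this! What is the world record for solving a Rubik's cube?"
```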

Example prompts and responses to show how the various built-in tools work can be found in the examples directory. The tools themselves are defined in lib/tools.mjs, including the description properties which act as further prompts to the LLM to suggest when and how the tools should be used.
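To give a feel for the pattern, here is a hypothetical sketch of a tool entry; the actual shape and property names in lib/tools.mjs may differ:

```javascript
// Hypothetical tool entry (assumed shape; see lib/tools.mjs for the real one).
// The description doubles as a prompt hint telling the LLM when to use the tool.
const tools = {
  calculate: {
    description: "calculate: evaluate a mathematical expression, e.g. calculate: 2+2",
    execute: async (input) => String(Function(`"use strict"; return (${input})`)()),
  },
};

tools.calculate.execute("6 * 7").then((result) => console.log(result)); // "42"
```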

There are a few Javascript and CSS files scattered about from jsfiddle.net to make the savetext, savehtml and savecode tools work locally.

Note: to enable the Javascript sandbox, you must pass the option --experimental-vm-modules to Node.js. The included go.sh script sets the Node.js recommended options.
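A minimal equivalent invocation might look like this (the entry-point filename is an assumption; the bundled go.sh is authoritative and may set additional options):

```
#!/bin/sh
# Minimal sketch: pass the vm-modules flag via NODE_OPTIONS.
# index.mjs is assumed; see the repository's go.sh for the real entry point.
NODE_OPTIONS="--experimental-vm-modules" node index.mjs
```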

Start-up

The application will display the built-in tools as it initialises them. Tool names followed by [1] are disabled by default for security reasons (e.g. they may access files on your local filesystem or your environment variables). You can enable them by typing enable [toolname] at the prompt. Tool names followed by [2] are disabled because you do not have the requisite API key in your environment or your version of Node.js does not support the required features.

Example dialogue

You can now run the chain:

% ./go.sh
How can I help? > what was the name of the first woman in space?

Calling search with first woman in space name

  1. Valentina Tereshkova - First Woman in Space - Biography
  2. **Valentina Tereshkova: First Woman in Space Space**
  3. The First Woman in Space: Valentina Tereshkova - ThoughtCo

Exiting the chain / vi mode

Authors

Future work planned