TL;DR

Generate your own assistant trained on your data with an interactive run on VESSL.

Description

LangChain helps developers build powerful applications by combining large language models (LLMs) with other sources of computation or knowledge. It supports common application types such as question answering, chatbots, and agents, and provides documentation and end-to-end examples to guide implementation.
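
As a rough illustration of the question-answering use case, here is a minimal sketch that asks a question over a couple of in-memory documents with LangChain. It is illustrative only: it assumes an OpenAI API key is available in the environment and uses module and class names from the LangChain version pinned in the image below, which may differ in newer releases.

# Minimal LangChain question-answering sketch (illustrative only).
# Assumes the OPENAI_API_KEY environment variable is set.
from langchain.llms import OpenAI
from langchain.chains.question_answering import load_qa_chain
from langchain.docstore.document import Document

# In the real example the documents come from your own data;
# here we use two in-memory strings.
docs = [
    Document(page_content="VESSL runs workloads defined in YAML on managed clusters."),
    Document(page_content="LangChain chains LLM calls with external data and tools."),
]

llm = OpenAI(temperature=0)  # temperature 0 for deterministic answers
chain = load_qa_chain(llm, chain_type="stuff")  # "stuff" packs all documents into one prompt

answer = chain.run(input_documents=docs, question="What does VESSL run?")
print(answer)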

YAML

name: langchain
description: "Generate your own assistant trained on your data with an interactive run on VESSL."
resources:
  cluster: aws-apne2
  preset: v1.cpu-4.mem-13
image: quay.io/vessl-ai/kernels:py38-202303150331
run:
  - workdir: /root/examples/langchain/question_answering/
    command: |
      bash ./run.sh
import:
  /root/examples: git://github.com/vessl-ai/examples
interactive:
  max_runtime: 24h
  jupyter:
    idle_timeout: 120m
ports:
  - name: streamlit
    type: http
    port: 8501
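
This run clones the examples repository into /root/examples, executes run.sh from the question-answering directory, and exposes the resulting Streamlit app on HTTP port 8501, while the interactive block keeps a Jupyter session available for up to 24 hours with a 120-minute idle timeout. The actual app code lives in the repository; the hypothetical sketch below only illustrates the shape of such a Streamlit front end that feeds a user question into a LangChain QA chain (module names and app layout are assumptions, not the repository's code).

# Hypothetical minimal Streamlit front end for a LangChain QA chain
# (the real app ships in vessl-ai/examples under langchain/question_answering).
import streamlit as st
from langchain.llms import OpenAI
from langchain.chains.question_answering import load_qa_chain
from langchain.docstore.document import Document

st.title("Question answering over your data")

# Paste source text and a question; a real app would load your own documents instead.
source_text = st.text_area("Source text")
question = st.text_input("Question")

if st.button("Ask") and source_text and question:
    docs = [Document(page_content=source_text)]
    chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff")
    st.write(chain.run(input_documents=docs, question=question))

Launched with streamlit run, an app like this listens on port 8501 by default, which is why that port is mapped in the YAML above.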

Demo
