beamds.beam.llm package#

Subpackages#

Submodules#

beamds.beam.llm.chat module#

beamds.beam.llm.conversation module#

Copied from: https://raw.githubusercontent.com/lm-sys/FastChat/main/fastchat/conversation.py

Conversation prompt templates.

We kindly request that you import fastchat instead of copying this file if you wish to use it. If you have any changes in mind, please contribute back so the community can benefit collectively and continue to maintain these valuable templates.

class beamds.beam.llm.conversation.Conversation(name: str, system_template: str = '{system_message}', system_message: str = '', roles: Tuple[str] = ('USER', 'ASSISTANT'), messages: List[List[str]] = (), offset: int = 0, sep_style: SeparatorStyle = SeparatorStyle.ADD_COLON_SINGLE, sep: str = '\n', sep2: str = None, stop_str: str | List[str] = None, stop_token_ids: List[int] = None, max_image_size_mb: int = None)[source]#

Bases: object

A class that manages prompt templates and keeps all conversation history.

append_message(role: str, message: str)[source]#

Append a new message.

convert_image_to_base64(image)[source]#

Given an image, return the base64 encoded image string.

copy()[source]#
dict()[source]#
extract_text_and_image_hashes_from_messages()[source]#
get_images()[source]#
get_prompt() str[source]#

Get the prompt for generation.

get_system_message()[source]#

Return the system message.

max_image_size_mb: int = None#
messages: List[List[str]] = ()#
name: str#
offset: int = 0#
roles: Tuple[str] = ('USER', 'ASSISTANT')#
save_new_images(has_csam_images=False, use_remote_storage=False)[source]#
sep: str = '\n'#
sep2: str = None#
sep_style: SeparatorStyle = SeparatorStyle.ADD_COLON_SINGLE#
set_system_message(system_message: str)[source]#

Set the system message.

stop_str: str | List[str] = None#
stop_token_ids: List[int] = None#
system_message: str = ''#
system_template: str = '{system_message}'#
to_anthropic_vision_api_messages()[source]#

Convert the conversation to the Claude-3 Messages Vision API format.

to_gemini_api_messages()[source]#
to_gradio_chatbot()[source]#

Convert the conversation to gradio chatbot format.

to_openai_api_messages()[source]#

Convert the conversation to OpenAI chat completion format.

to_openai_image_format(image_urls)[source]#
to_openai_vision_api_messages()[source]#

Convert the conversation to the OpenAI Vision API completion format.

to_reka_api_messages()[source]#
to_vertex_api_messages()[source]#
update_last_message(message: str)[source]#

Update the last output.

The last message is typically set to be None when constructing the prompt, so we need to update it in-place after getting the response from a model.
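
Below is a minimal sketch of the typical round-trip with this class, assuming the FastChat-style API documented above; the template values are illustrative, not a registered template.

```python
from beamds.beam.llm.conversation import Conversation, SeparatorStyle

conv = Conversation(
    name="demo",                      # illustrative name, not a registered template
    system_message="You are a helpful assistant.",
    roles=("USER", "ASSISTANT"),
    messages=[],                      # pass a fresh list; registered templates are copy()-ed instead
    sep_style=SeparatorStyle.ADD_COLON_SINGLE,
    sep="\n",
)
conv.append_message(conv.roles[0], "What is the capital of France?")
conv.append_message(conv.roles[1], None)   # placeholder for the model's reply
prompt = conv.get_prompt()
# With ADD_COLON_SINGLE this renders roughly as:
# "You are a helpful assistant.\nUSER: What is the capital of France?\nASSISTANT:"
conv.update_last_message("Paris.")         # fill in the reply in-place
```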

class beamds.beam.llm.conversation.SeparatorStyle(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)[source]#

Bases: IntEnum

Separator styles.

ADD_COLON_SINGLE = 1#
ADD_COLON_SPACE_SINGLE = 3#
ADD_COLON_TWO = 2#
ADD_NEW_LINE_SINGLE = 6#
CHATGLM = 9#
CHATGLM3 = 17#
CHATINTERN = 11#
CHATML = 10#
CLLM = 22#
DEEPSEEK_CHAT = 18#
DEFAULT = 23#
DOLLY = 12#
FALCON_CHAT = 16#
GEMMA = 21#
LLAMA2 = 7#
LLAMA3 = 8#
METAMATH = 19#
NO_COLON_SINGLE = 4#
NO_COLON_TWO = 5#
PHOENIX = 14#
ROBIN = 15#
RWKV = 13#
YUAN2 = 20#
beamds.beam.llm.conversation.get_conv_template(name: str) Conversation[source]#

Get a conversation template.

beamds.beam.llm.conversation.register_conv_template(template: Conversation, override: bool = False)[source]#

Register a new conversation template.
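
A short sketch of the registration flow, assuming template names behave as in upstream FastChat (the "demo-template" name below is hypothetical):

```python
from beamds.beam.llm.conversation import (
    Conversation, SeparatorStyle, get_conv_template, register_conv_template,
)

register_conv_template(
    Conversation(
        name="demo-template",          # hypothetical name for illustration
        system_message="You are a helpful assistant.",
        roles=("USER", "ASSISTANT"),
        sep_style=SeparatorStyle.ADD_COLON_SINGLE,
        sep="\n",
    )
)

conv = get_conv_template("demo-template")  # returns a fresh copy of the template
conv.append_message(conv.roles[0], "Hello!")
```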

beamds.beam.llm.core module#

beamds.beam.llm.hf_conversation module#

class beamds.beam.llm.hf_conversation.Conversation(messages: str | List[Dict[str, str]] = None, conversation_id: UUID = None, **deprecated_kwargs)[source]#

Bases: object

Utility class containing a conversation and its history. This class is meant to be used as an input to the ConversationalPipeline. The conversation contains several utility functions to manage the addition of new user inputs and generated model responses.

Parameters:
  • messages (Union[str, List[Dict[str, str]]], optional) – The initial messages to start the conversation, either a string, or a list of dicts containing “role” and “content” keys. If a string is passed, it is interpreted as a single message with the “user” role.

  • conversation_id (uuid.UUID, optional) – Unique identifier for the conversation. If not provided, a random UUID4 id will be assigned to the conversation.

Usage:

```python
conversation = Conversation("Going to the movies tonight - any suggestions?")
conversation.add_message({"role": "assistant", "content": "The Big Lebowski."})
conversation.add_message({"role": "user", "content": "Is it good?"})
```

add_message(message: Dict[str, str])[source]#
add_user_input(text: str, overwrite: bool = False)[source]#

Add a user input to the conversation for the next round. This is a legacy method that assumes that inputs must alternate user/assistant/user/assistant, and so will not add multiple user messages in succession. We recommend just using add_message with role “user” instead.

append_response(response: str)[source]#

This is a legacy method. We recommend just using add_message with an appropriate role instead.

property generated_responses#
iter_texts()[source]#
mark_processed()[source]#

This is a legacy method, as the Conversation no longer distinguishes between processed and unprocessed user input. We set a counter here to keep behaviour mostly backward-compatible, but in general you should just read the messages directly when writing new code.

property new_user_input#
property past_user_inputs#
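
A hedged sketch of reading the history back out; in the upstream transformers implementation iter_texts yields (is_user, text) pairs for backward compatibility, and this copy is assumed to behave the same:

```python
from beamds.beam.llm.hf_conversation import Conversation

conversation = Conversation("Going to the movies tonight - any suggestions?")
conversation.add_message({"role": "assistant", "content": "The Big Lebowski."})

for is_user, text in conversation.iter_texts():
    speaker = "user" if is_user else "bot"
    print(f"{speaker}: {text}")
```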

beamds.beam.llm.model_adapter module#

Copied from: https://raw.githubusercontent.com/lm-sys/FastChat/main/fastchat/model/model_adapter.py

Changes from the original: removed the transformers dependency and added a local conversation dependency; removed the functions get_generate_stream_function and load_model; removed the imports psutil, math, and torch.

Model adapter registration.

class beamds.beam.llm.model_adapter.AiroborosAdapter[source]#

Bases: BaseModelAdapter

The model adapter for jondurbin/airoboros-*

get_default_conv_template(model_path: str) Conversation[source]#
load_model(model_path: str, from_pretrained_kwargs: dict)[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.AlpacaAdapter[source]#

Bases: BaseModelAdapter

The model adapter for Alpaca

get_default_conv_template(model_path: str) Conversation[source]#
match(model_path: str)[source]#
use_fast_tokenizer = False#
class beamds.beam.llm.model_adapter.AquilaChatAdapter[source]#

Bases: BaseModelAdapter

The model adapter for BAAI/Aquila

Now supports:

  • BAAI/AquilaChat-7B
  • BAAI/AquilaChat2-7B
  • BAAI/AquilaChat2-34B

get_default_conv_template(model_path: str) Conversation[source]#
load_model(model_path: str, from_pretrained_kwargs: dict)[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.AzureOpenAIAdapter[source]#

Bases: BaseModelAdapter

The model adapter for Azure OpenAI

get_default_conv_template(model_path: str) Conversation[source]#
load_model(model_path: str, from_pretrained_kwargs: dict)[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.BGEAdapter[source]#

Bases: BaseModelAdapter

The model adapter for BGE (e.g., BAAI/bge-large-en-v1.5)

get_default_conv_template(model_path: str) Conversation[source]#
load_model(model_path: str, from_pretrained_kwargs: dict)[source]#
match(model_path: str)[source]#
use_fast_tokenizer = False#
class beamds.beam.llm.model_adapter.BagelAdapter[source]#

Bases: BaseModelAdapter

Model adapter for jondurbin/bagel-* models

get_default_conv_template(model_path: str) Conversation[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.BaichuanAdapter[source]#

Bases: BaseModelAdapter

The model adapter for Baichuan models (e.g., baichuan-inc/Baichuan-7B)

get_default_conv_template(model_path: str) Conversation[source]#
load_model(model_path: str, from_pretrained_kwargs: dict)[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.BaizeAdapter[source]#

Bases: BaseModelAdapter

The model adapter for project-baize/baize-v2-7b

get_default_conv_template(model_path: str) Conversation[source]#
match(model_path: str)[source]#
use_fast_tokenizer = False#
class beamds.beam.llm.model_adapter.BardAdapter[source]#

Bases: BaseModelAdapter

The model adapter for Bard

get_default_conv_template(model_path: str) Conversation[source]#
load_model(model_path: str, from_pretrained_kwargs: dict)[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.BaseModelAdapter[source]#

Bases: object

The base and the default model adapter.

get_default_conv_template(model_path: str) Conversation[source]#
load_compress_model(model_path, device, torch_dtype, revision='main')[source]#
load_model(model_path: str, from_pretrained_kwargs: dict)[source]#
match(model_path: str)[source]#
use_fast_tokenizer = True#
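
A hedged sketch of how a custom adapter plugs into the registry; the "my-model" path matching below is hypothetical, and the "one_shot" template name is assumed to be registered as in upstream FastChat:

```python
from beamds.beam.llm.conversation import Conversation, get_conv_template
from beamds.beam.llm.model_adapter import (
    BaseModelAdapter, get_conversation_template, register_model_adapter,
)

class MyModelAdapter(BaseModelAdapter):
    """Matches any model path containing 'my-model' (hypothetical)."""

    def match(self, model_path: str):
        return "my-model" in model_path.lower()

    def get_default_conv_template(self, model_path: str) -> Conversation:
        return get_conv_template("one_shot")  # reuse a registered template

register_model_adapter(MyModelAdapter)
conv = get_conversation_template("org/my-model-7b")  # resolves via MyModelAdapter
```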
class beamds.beam.llm.model_adapter.BiLLaAdapter[source]#

Bases: BaseModelAdapter

The model adapter for Neutralzz/BiLLa-7B-SFT

get_default_conv_template(model_path: str) Conversation[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.CamelAdapter[source]#

Bases: BaseModelAdapter

The model adapter for camel-ai/CAMEL-13B-Combined-Data

get_default_conv_template(model_path: str) Conversation[source]#
match(model_path: str)[source]#
use_fast_tokenizer = False#
class beamds.beam.llm.model_adapter.CatPPTAdapter[source]#

Bases: BaseModelAdapter

The model adapter for CatPPT (e.g. rishiraj/CatPPT)

get_default_conv_template(model_path: str) Conversation[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.ChangGPTAdapter[source]#

Bases: BaseModelAdapter

The model adapter for lcw99/polyglot-ko-12.8b-chang-instruct-chat

get_default_conv_template(model_path: str) Conversation[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.ChatGLMAdapter[source]#

Bases: BaseModelAdapter

The model adapter for THUDM/chatglm-6b, THUDM/chatglm2-6b

get_default_conv_template(model_path: str) Conversation[source]#
load_model(model_path: str, from_pretrained_kwargs: dict)[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.ChatGPTAdapter[source]#

Bases: BaseModelAdapter

The model adapter for ChatGPT

get_default_conv_template(model_path: str) Conversation[source]#
load_model(model_path: str, from_pretrained_kwargs: dict)[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.ClaudeAdapter[source]#

Bases: BaseModelAdapter

The model adapter for Claude

get_default_conv_template(model_path: str) Conversation[source]#
load_model(model_path: str, from_pretrained_kwargs: dict)[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.CllmAdapter[source]#

Bases: BaseModelAdapter

The model adapter for CLLM

get_default_conv_template(model_path: str) Conversation[source]#
load_model(model_path: str, from_pretrained_kwargs: dict)[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.CodeGeexAdapter[source]#

Bases: BaseModelAdapter

The model adapter for THUDM/codegeex-6b, THUDM/codegeex2-6b

get_default_conv_template(model_path: str) Conversation[source]#
load_model(model_path: str, from_pretrained_kwargs: dict)[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.CodeLlamaAdapter[source]#

Bases: BaseModelAdapter

The model adapter for CodeLlama (e.g., codellama/CodeLlama-34b-hf)

get_default_conv_template(model_path: str) Conversation[source]#
load_model(model_path: str, from_pretrained_kwargs: dict)[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.CohereAdapter[source]#

Bases: BaseModelAdapter

The model adapter for Cohere

get_default_conv_template(model_path: str) Conversation[source]#
load_model(model_path: str, from_pretrained_kwargs: dict)[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.CuteGPTAdapter[source]#

Bases: BaseModelAdapter

The model adapter for CuteGPT

get_default_conv_template(model_path: str) Conversation[source]#
load_model(model_path: str, from_pretrained_kwargs: dict)[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.DBRXAdapter[source]#

Bases: BaseModelAdapter

The model adapter for DBRX

get_default_conv_template(model_path: str) Conversation[source]#
load_model(model_path: str, from_pretrained_kwargs: dict)[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.DeepseekChatAdapter[source]#

Bases: BaseModelAdapter

The model adapter for deepseek-ai’s chat models

get_default_conv_template(model_path: str) Conversation[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.DeepseekCoderAdapter[source]#

Bases: BaseModelAdapter

The model adapter for deepseek-ai’s coder models

get_default_conv_template(model_path: str) Conversation[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.DollyV2Adapter[source]#

Bases: BaseModelAdapter

The model adapter for databricks/dolly-v2-12b

get_default_conv_template(model_path: str) Conversation[source]#
load_model(model_path: str, from_pretrained_kwargs: dict)[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.DolphinAdapter[source]#

Bases: OpenOrcaAdapter

Model adapter for ehartford/dolphin-2.2.1-mistral-7b

get_default_conv_template(model_path: str) Conversation[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.E5Adapter[source]#

Bases: BaseModelAdapter

The model adapter for E5 (e.g., intfloat/e5-large-v2)

get_default_conv_template(model_path: str) Conversation[source]#
load_model(model_path: str, from_pretrained_kwargs: dict)[source]#
match(model_path: str)[source]#
use_fast_tokenizer = False#
class beamds.beam.llm.model_adapter.FalconAdapter[source]#

Bases: BaseModelAdapter

The model adapter for tiiuae/falcon-40b

get_default_conv_template(model_path: str) Conversation[source]#
load_model(model_path: str, from_pretrained_kwargs: dict)[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.FalconChatAdapter[source]#

Bases: BaseModelAdapter

get_default_conv_template(model_path: str) Conversation[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.GeminiAdapter[source]#

Bases: BaseModelAdapter

The model adapter for Gemini

get_default_conv_template(model_path: str) Conversation[source]#
load_model(model_path: str, from_pretrained_kwargs: dict)[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.GeminiDevAdapter[source]#

Bases: BaseModelAdapter

The model adapter for Gemini 1.5 Pro

get_default_conv_template(model_path: str) Conversation[source]#
load_model(model_path: str, from_pretrained_kwargs: dict)[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.GemmaAdapter[source]#

Bases: BaseModelAdapter

The model adapter for google/gemma

get_default_conv_template(model_path: str) Conversation[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.GoogleT5Adapter[source]#

Bases: BaseModelAdapter

The model adapter for google/Flan based models, such as Salesforce/codet5p-6b, lmsys/fastchat-t5-3b-v1.0, flan-t5-*, flan-ul2

load_model(model_path: str, from_pretrained_kwargs: dict)[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.GuanacoAdapter[source]#

Bases: BaseModelAdapter

The model adapter for timdettmers/guanaco-33b-merged

get_default_conv_template(model_path: str) Conversation[source]#
load_model(model_path: str, from_pretrained_kwargs: dict)[source]#
match(model_path: str)[source]#
use_fast_tokenizer = False#
class beamds.beam.llm.model_adapter.H2OGPTAdapter[source]#

Bases: BaseModelAdapter

The model adapter for h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-7b

get_default_conv_template(model_path: str) Conversation[source]#
match(model_path: str)[source]#
use_fast_tokenizer = False#
class beamds.beam.llm.model_adapter.Hermes2Adapter[source]#

Bases: BaseModelAdapter

Model adapter for teknium/OpenHermes-2.5-Mistral-7B and teknium/OpenHermes-2-Mistral-7B models

get_default_conv_template(model_path: str) Conversation[source]#
load_model(model_path: str, from_pretrained_kwargs: dict)[source]#
match(model_path: str)[source]#
use_fast_tokenizer = False#
class beamds.beam.llm.model_adapter.InternLMChatAdapter[source]#

Bases: BaseModelAdapter

The model adapter for internlm/internlm-chat-7b

get_default_conv_template(model_path: str) Conversation[source]#
load_model(model_path: str, from_pretrained_kwargs: dict)[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.KoalaAdapter[source]#

Bases: BaseModelAdapter

The model adapter for Koala

get_default_conv_template(model_path: str) Conversation[source]#
match(model_path: str)[source]#
use_fast_tokenizer = False#
class beamds.beam.llm.model_adapter.Lamma2ChineseAdapter[source]#

Bases: BaseModelAdapter

The model adapter for FlagAlpha/LLama2-Chinese sft

get_default_conv_template(model_path: str) Conversation[source]#
load_model(model_path: str, from_pretrained_kwargs: dict)[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.Lamma2ChineseAlpacaAdapter[source]#

Bases: BaseModelAdapter

The model adapter for ymcui/Chinese-LLaMA-Alpaca sft

get_default_conv_template(model_path: str) Conversation[source]#
load_model(model_path: str, from_pretrained_kwargs: dict)[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.LemurAdapter[source]#

Bases: BaseModelAdapter

The model adapter for OpenLemur/lemur-70b-chat-v1

get_default_conv_template(model_path: str) Conversation[source]#
match(model_path: str)[source]#
use_fast_tokenizer = False#
class beamds.beam.llm.model_adapter.Llama2Adapter[source]#

Bases: BaseModelAdapter

The model adapter for Llama-2 (e.g., meta-llama/Llama-2-7b-hf)

get_default_conv_template(model_path: str) Conversation[source]#
load_model(model_path: str, from_pretrained_kwargs: dict)[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.Llama2ChangAdapter[source]#

Bases: Llama2Adapter

The model adapter for Llama2-ko-chang (e.g., lcw99/llama2-ko-chang-instruct-chat)

get_default_conv_template(model_path: str) Conversation[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.Llama3Adapter[source]#

Bases: BaseModelAdapter

The model adapter for Llama-3 (e.g., meta-llama/Meta-Llama-3-8B-Instruct)

get_default_conv_template(model_path: str) Conversation[source]#
load_model(model_path: str, from_pretrained_kwargs: dict)[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.LlavaAdapter[source]#

Bases: BaseModelAdapter

The model adapter for liuhaotian/llava-v1.5 series of models

get_default_conv_template(model_path: str) Conversation[source]#
load_model(model_path: str, from_pretrained_kwargs: dict)[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.LongChatAdapter[source]#

Bases: BaseModelAdapter

Model adapter for LongChat models (e.g., lmsys/longchat-7b-16k).

get_default_conv_template(model_path: str) Conversation[source]#
load_model(model_path: str, from_pretrained_kwargs: dict)[source]#
match(model_path: str)[source]#
use_fast_tokenizer = False#
class beamds.beam.llm.model_adapter.MPTAdapter[source]#

Bases: BaseModelAdapter

The model adapter for MPT series (mosaicml/mpt-7b-chat, mosaicml/mpt-30b-chat)

get_default_conv_template(model_path: str) Conversation[source]#
load_model(model_path: str, from_pretrained_kwargs: dict)[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.ManticoreAdapter[source]#

Bases: BaseModelAdapter

The model adapter for openaccess-ai-collective/manticore-13b-chat-pyg

get_default_conv_template(model_path: str) Conversation[source]#
match(model_path: str)[source]#
use_fast_tokenizer = False#
class beamds.beam.llm.model_adapter.MetaMathAdapter[source]#

Bases: BaseModelAdapter

The model adapter for MetaMath models

get_default_conv_template(model_path: str) Conversation[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.MicrosoftOrcaAdapter[source]#

Bases: BaseModelAdapter

The model adapter for Microsoft/Orca-2 series of models (e.g. Microsoft/Orca-2-7b, Microsoft/Orca-2-13b)

get_default_conv_template(model_path: str) Conversation[source]#
match(model_path: str)[source]#
use_fast_tokenizer = False#
class beamds.beam.llm.model_adapter.MistralAdapter[source]#

Bases: BaseModelAdapter

The model adapter for Mistral AI models

get_default_conv_template(model_path: str) Conversation[source]#
load_model(model_path: str, from_pretrained_kwargs: dict)[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.NotusAdapter[source]#

Bases: BaseModelAdapter

The model adapter for Notus (e.g. argilla/notus-7b-v1)

get_default_conv_template(model_path: str) Conversation[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.NousHermes2MixtralAdapter[source]#

Bases: BaseModelAdapter

Model adapter for NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO model

get_default_conv_template(model_path: str) Conversation[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.NousHermesAdapter[source]#

Bases: BaseModelAdapter

The model adapter for NousResearch/Nous-Hermes-13b

get_default_conv_template(model_path: str) Conversation[source]#
match(model_path: str)[source]#
use_fast_tokenizer = False#
class beamds.beam.llm.model_adapter.OasstLLaMAAdapter[source]#

Bases: BaseModelAdapter

The model adapter for OpenAssistant/oasst-sft-7-llama-30b

get_default_conv_template(model_path: str) Conversation[source]#
match(model_path: str)[source]#
use_fast_tokenizer = False#
class beamds.beam.llm.model_adapter.OasstPythiaAdapter[source]#

Bases: BaseModelAdapter

The model adapter for OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5

get_default_conv_template(model_path: str) Conversation[source]#
load_model(model_path: str, from_pretrained_kwargs: dict)[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.OlmoAdapter[source]#

Bases: BaseModelAdapter

The model adapter for allenai/OLMo-7B-Instruct

get_default_conv_template(model_path: str) Conversation[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.OpenBuddyAdapter[source]#

Bases: BaseModelAdapter

The model adapter for OpenBuddy/openbuddy-7b-v1.1-bf16-enc

get_default_conv_template(model_path: str) Conversation[source]#
match(model_path: str)[source]#
use_fast_tokenizer = False#
class beamds.beam.llm.model_adapter.OpenChat35Adapter[source]#

Bases: BaseModelAdapter

The model adapter for OpenChat 3.5 (e.g. openchat/openchat_3.5)

get_default_conv_template(model_path: str) Conversation[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.OpenLLaMaOpenInstructAdapter[source]#

Bases: BaseModelAdapter

The model adapter for OpenLLaMa-Open-Instruct (e.g., VMware/open-llama-7b-open-instruct)

get_default_conv_template(model_path: str) Conversation[source]#
load_model(model_path: str, from_pretrained_kwargs: dict)[source]#
match(model_path: str)[source]#
use_fast_tokenizer = False#
class beamds.beam.llm.model_adapter.OpenOrcaAdapter[source]#

Bases: BaseModelAdapter

Model adapter for Open-Orca models, which may use different prompt templates (e.g., Open-Orca/OpenOrcaxOpenChat-Preview2-13B, Open-Orca/Mistral-7B-OpenOrca). OpenOrcaxOpenChat-Preview2-13B uses their “OpenChat Llama2 V1” prompt template.

get_default_conv_template(model_path: str) Conversation[source]#
load_model(model_path: str, from_pretrained_kwargs: dict)[source]#
match(model_path: str)[source]#
use_fast_tokenizer = False#
class beamds.beam.llm.model_adapter.PaLM2Adapter[source]#

Bases: BaseModelAdapter

The model adapter for PaLM2

get_default_conv_template(model_path: str) Conversation[source]#
load_model(model_path: str, from_pretrained_kwargs: dict)[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.PeftModelAdapter[source]#

Bases: object

Loads any “peft” model and its base model.

get_default_conv_template(model_path: str) Conversation[source]#

Uses the conv template of the base model

load_model(model_path: str, from_pretrained_kwargs: dict)[source]#

Loads the base model then the (peft) adapter weights

match(model_path: str)[source]#

Accepts any model path with “peft” in the name

class beamds.beam.llm.model_adapter.PhindCodeLlamaAdapter[source]#

Bases: CodeLlamaAdapter

The model adapter for Phind-CodeLlama (e.g., Phind/Phind-CodeLlama-34B-v2)

get_default_conv_template(model_path: str) Conversation[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.PhoenixAdapter[source]#

Bases: BaseModelAdapter

The model adapter for FreedomIntelligence/phoenix-inst-chat-7b

get_default_conv_template(model_path: str) Conversation[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.PplxAIAdapter[source]#

Bases: BaseModelAdapter

The model adapter for Perplexity AI

get_default_conv_template(model_path: str) Conversation[source]#
load_model(model_path: str, from_pretrained_kwargs: dict)[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.PygmalionAdapter[source]#

Bases: BaseModelAdapter

The model adapter for the Pygmalion/Metharme series of models (e.g., PygmalionAI/mythalion-13b)

get_default_conv_template(model_path: str) Conversation[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.PythiaAdapter[source]#

Bases: BaseModelAdapter

The model adapter for any EleutherAI/pythia model

load_model(model_path: str, from_pretrained_kwargs: dict)[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.QwenChatAdapter[source]#

Bases: BaseModelAdapter

The model adapter for Qwen/Qwen-7B-Chat.

To run this model, you need to install flash attention first:

```bash
git clone https://github.com/Dao-AILab/flash-attention
cd flash-attention && pip install .
pip install csrc/layer_norm
pip install csrc/rotary
```

Since flash-attention 2.0, the following renames happened:

  • flash_attn_unpadded_func -> flash_attn_varlen_func
  • flash_attn_unpadded_qkvpacked_func -> flash_attn_varlen_qkvpacked_func
  • flash_attn_unpadded_kvpacked_func -> flash_attn_varlen_kvpacked_func

You may need to revise the code in https://huggingface.co/Qwen/Qwen-7B-Chat/blob/main/modeling_qwen.py#L69 to `from flash_attn.flash_attn_interface import flash_attn_varlen_func as flash_attn_unpadded_func`.

float_set(config, option)[source]#
get_default_conv_template(model_path: str) Conversation[source]#
load_model(model_path: str, from_pretrained_kwargs: dict)[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.ReaLMAdapter[source]#

Bases: BaseModelAdapter

The model adapter for FreedomIntelligence/ReaLM-7b

get_default_conv_template(model_path: str) Conversation[source]#
load_model(model_path: str, from_pretrained_kwargs: dict)[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.RedPajamaINCITEAdapter[source]#

Bases: BaseModelAdapter

The model adapter for togethercomputer/RedPajama-INCITE-7B-Chat

get_default_conv_template(model_path: str) Conversation[source]#
load_model(model_path: str, from_pretrained_kwargs: dict)[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.RekaAdapter[source]#

Bases: BaseModelAdapter

The model adapter for Reka

get_default_conv_template(model_path: str) Conversation[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.RobinAdapter[source]#

Bases: BaseModelAdapter

The model adapter for LMFlow/Full-Robin-7b-v2

get_default_conv_template(model_path: str) Conversation[source]#
match(model_path: str)[source]#
use_fast_tokenizer = False#
class beamds.beam.llm.model_adapter.RwkvAdapter[source]#

Bases: BaseModelAdapter

The model adapter for BlinkDL/RWKV-4-Raven

get_default_conv_template(model_path: str) Conversation[source]#
load_model(model_path: str, from_pretrained_kwargs: dict)[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.SmaugChatAdapter[source]#

Bases: BaseModelAdapter

The model adapter for abacusai/Smaug-2-72B.

get_default_conv_template(model_path: str) Conversation[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.SnoozyAdapter[source]#

Bases: BaseModelAdapter

The model adapter for nomic-ai/gpt4all-13b-snoozy

get_default_conv_template(model_path: str) Conversation[source]#
match(model_path: str)[source]#
use_fast_tokenizer = False#
class beamds.beam.llm.model_adapter.SolarAdapter[source]#

Bases: BaseModelAdapter

The model adapter for upstage/SOLAR-10.7B-Instruct-v1.0

get_default_conv_template(model_path: str) Conversation[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.StableLMAdapter[source]#

Bases: BaseModelAdapter

The model adapter for StabilityAI/stablelm-tuned-alpha-7b

get_default_conv_template(model_path: str) Conversation[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.StableVicunaAdapter[source]#

Bases: BaseModelAdapter

The model adapter for StableVicuna

get_default_conv_template(model_path: str) Conversation[source]#
load_model(model_path: str, from_pretrained_kwargs: dict)[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.StarChatAdapter[source]#

Bases: BaseModelAdapter

The model adapter for HuggingFaceH4/starchat-beta

get_default_conv_template(model_path: str) Conversation[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.SteerLMAdapter[source]#

Bases: BaseModelAdapter

The model adapter for nvidia/Llama2-70B-SteerLM-Chat

get_default_conv_template(model_path: str) Conversation[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.TenyxChatAdapter[source]#

Bases: BaseModelAdapter

The model adapter for TenyxChat (e.g. tenyx/TenyxChat-7B-v1)

get_default_conv_template(model_path: str) Conversation[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.TigerBotAdapter[source]#

Bases: BaseModelAdapter

The model adapter for TigerResearch/tigerbot-7b-sft

get_default_conv_template(model_path: str) Conversation[source]#
load_model(model_path: str, from_pretrained_kwargs: dict)[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.TinyLlamaAdapter[source]#

Bases: BaseModelAdapter

The model adapter for TinyLlama (e.g. TinyLlama/TinyLlama-1.1B-Chat-v1.0)

get_default_conv_template(model_path: str) Conversation[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.TuluAdapter[source]#

Bases: BaseModelAdapter

The model adapter for allenai/tulu-30b

get_default_conv_template(model_path: str) Conversation[source]#
match(model_path: str)[source]#
use_fast_tokenizer = False#
class beamds.beam.llm.model_adapter.VicunaAdapter[source]#

Bases: BaseModelAdapter

Model adapter for Vicuna models (e.g., lmsys/vicuna-7b-v1.5)

get_default_conv_template(model_path: str) Conversation[source]#
load_model(model_path: str, from_pretrained_kwargs: dict)[source]#
match(model_path: str)[source]#
raise_warning_for_old_weights(model)[source]#
use_fast_tokenizer = False#
class beamds.beam.llm.model_adapter.VigogneAdapter[source]#

Bases: BaseModelAdapter

The model adapter for vigogne (e.g., bofenghuang/vigogne-2-7b-chat)

get_default_conv_template(model_path: str) Conversation[source]#
load_model(model_path: str, from_pretrained_kwargs: dict)[source]#
match(model_path: str)[source]#
use_fast_tokenizer = False#
class beamds.beam.llm.model_adapter.WizardCoderAdapter[source]#

Bases: BaseModelAdapter

The model adapter for WizardCoder (e.g., WizardLM/WizardCoder-Python-34B-V1.0)

get_default_conv_template(model_path: str) Conversation[source]#
match(model_path: str)[source]#
use_fast_tokenizer = False#
class beamds.beam.llm.model_adapter.WizardLMAdapter[source]#

Bases: BaseModelAdapter

The model adapter for WizardLM/WizardLM-13B-V1.0

get_default_conv_template(model_path: str) Conversation[source]#
match(model_path: str)[source]#
use_fast_tokenizer = False#
class beamds.beam.llm.model_adapter.XGenAdapter[source]#

Bases: BaseModelAdapter

The model adapter for Salesforce/xgen-7b

get_default_conv_template(model_path: str) Conversation[source]#
load_model(model_path: str, from_pretrained_kwargs: dict)[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.XdanAdapter[source]#

Bases: BaseModelAdapter

The model adapter for xDAN-AI (e.g. xDAN-AI/xDAN-L1-Chat-RL-v1)

get_default_conv_template(model_path: str) Conversation[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.XwinLMAdapter[source]#

Bases: BaseModelAdapter

The model adapter for the Xwin-LM V0.1 and V0.2 series of models (e.g., Xwin-LM/Xwin-LM-70B-V0.1)

get_default_conv_template(model_path: str) Conversation[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.YandexGPTAdapter[source]#

Bases: BaseModelAdapter

The model adapter for YandexGPT

get_default_conv_template(model_path: str) Conversation[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.YiAdapter[source]#

Bases: BaseModelAdapter

The model adapter for Yi models

get_default_conv_template(model_path: str) Conversation[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.Yuan2Adapter[source]#

Bases: BaseModelAdapter

The model adapter for Yuan2.0

get_default_conv_template(model_path: str) Conversation[source]#
load_model(model_path: str, from_pretrained_kwargs: dict)[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.YuanAdapter[source]#

Bases: BaseModelAdapter

The model adapter for Yuan

get_default_conv_template(model_path: str) Conversation[source]#
load_model(model_path: str, from_pretrained_kwargs: dict)[source]#
match(model_path: str)[source]#
class beamds.beam.llm.model_adapter.ZephyrAdapter[source]#

Bases: BaseModelAdapter

The model adapter for Zephyr (e.g. HuggingFaceH4/zephyr-7b-alpha)

get_default_conv_template(model_path: str) Conversation[source]#
match(model_path: str)[source]#
beamds.beam.llm.model_adapter.add_model_args(parser)[source]#
beamds.beam.llm.model_adapter.get_conversation_template(model_path: str) Conversation[source]#

Get the default conversation template.

beamds.beam.llm.model_adapter.get_model_adapter(model_path: str) BaseModelAdapter[source]#

Get a model adapter for a model_path.

beamds.beam.llm.model_adapter.raise_warning_for_incompatible_cpu_offloading_configuration(device: str, load_8bit: bool, cpu_offloading: bool)[source]#
beamds.beam.llm.model_adapter.register_model_adapter(cls)[source]#

Register a model adapter.

beamds.beam.llm.model_adapter.remove_parent_directory_name(model_path)[source]#

Remove parent directory name.
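
A brief usage sketch of the module-level helpers; the Vicuna path is illustrative and assumes its adapter is registered as in upstream FastChat:

```python
from beamds.beam.llm.model_adapter import get_conversation_template, get_model_adapter

adapter = get_model_adapter("lmsys/vicuna-7b-v1.5")        # matched by model path
conv = get_conversation_template("lmsys/vicuna-7b-v1.5")   # that adapter's default template
print(type(adapter).__name__, conv.name)
```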

beamds.beam.llm.models module#

beamds.beam.llm.openai module#

beamds.beam.llm.resource module#

beamds.beam.llm.response module#

class beamds.beam.llm.response.LLMResponse(response, llm, prompt=None, prompt_kwargs=None, chat=False, stream=False, parse_retries=3, sleep=1, prompt_type='completion', verify=True, **kwargs)[source]#

Bases: object

add_task_result(task_result, success=True)[source]#
property bool#
property choices#
property csv#
property float#
property html#
property int#
property json#
property judge#
property openai_format#
parse(protocol='json')[source]#
parse_text(text, protocol='json')[source]#
property prompt#
property prompt_kwargs#
property task_result#
property task_success#
property text#
property toml#
verify()[source]#
property xml#
property yaml#
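
A hedged consumer sketch: only the attribute names are taken from the listing above, and the JSON-parsing behaviour of the json property is assumed:

```python
from beamds.beam.llm.response import LLMResponse

def extract_payload(res: LLMResponse) -> dict:
    """Pull a JSON payload out of a completion (hypothetical helper)."""
    if not res.task_success:      # whether the associated task succeeded
        raise RuntimeError(res.text)
    return res.json               # completion text parsed as JSON
```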

beamds.beam.llm.task module#

beamds.beam.llm.tools module#

beamds.beam.llm.utils module#

beamds.beam.llm.utils.default_tokenizer(text)[source]#
beamds.beam.llm.utils.estimate_tokens(s)[source]#
beamds.beam.llm.utils.get_conversation_template(model_path)[source]#
beamds.beam.llm.utils.split_to_tokens(s)[source]#
beamds.beam.llm.utils.text_splitter(text, chunk_size=100, separators=['\n\n', '. ', ' '], length_function=None)[source]#
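
A hedged sketch of the splitting helpers; the return shapes (an iterable of chunks from text_splitter, a numeric estimate from estimate_tokens) are assumed from the signatures above:

```python
from beamds.beam.llm.utils import estimate_tokens, text_splitter

text = "First paragraph.\n\nSecond paragraph. It has two sentences."
chunks = list(text_splitter(text, chunk_size=40, separators=['\n\n', '. ', ' ']))
print(len(chunks), estimate_tokens(text))
```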

Module contents#