LLM Cost Calculation
OpenAI API Price Calculator
Overview
This package calculates the cost of OpenAI API usage.
Pricing is based on the OpenAI API pricing page. Source code: GitHub.
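The underlying arithmetic is simple: each model has a price per 1,000 tokens, and the total cost is the sum of the embedding, prompt, and completion charges. A minimal stand-alone sketch of that calculation (the per-1K prices below are illustrative placeholders, not the package's actual price table):

```python
# Illustrative per-1,000-token prices in USD; real values come from the
# OpenAI pricing page and will differ by model and context size.
PRICES_PER_1K = {
    "embedding": 0.0001,
    "prompt": 0.0015,
    "completion": 0.002,
}

def estimate_cost(embedding_tokens, prompt_tokens, completion_tokens):
    """Return (embeddings_cost, prompt_cost, completion_cost, total_cost)."""
    embeddings_cost = embedding_tokens / 1000 * PRICES_PER_1K["embedding"]
    prompt_cost = prompt_tokens / 1000 * PRICES_PER_1K["prompt"]
    completion_cost = completion_tokens / 1000 * PRICES_PER_1K["completion"]
    total_cost = embeddings_cost + prompt_cost + completion_cost
    return embeddings_cost, prompt_cost, completion_cost, total_cost

print(estimate_cost(1000, 2000, 500))
```

The package wraps the same idea behind `calculate_openai_pricing`, with the real per-model price table filled in.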
Usage
Installation
```shell
pip install openai-pricing-calc-draft
```
Without Surrounding Code
If you already have token counts, pass them to `calculate_openai_pricing` directly. The counts below are illustrative placeholders; substitute your own measured values.

```python
from lll_pricing_calculation import calculate_openai_pricing

# Example token counts; replace with your own measured values.
embedding_tokens = 1000
prompt_tokens = 2000
completion_tokens = 500

(costForThousandCurrency, embeddingsCost, promptCost,
 completionTokenCost, total_cost) = calculate_openai_pricing(
    "GPT-3.5 Turbo",
    "4K context",
    embedding_tokens,
    prompt_tokens,
    completion_tokens,
)

print("currency: " + costForThousandCurrency)
print("embeddingsCost: " + str(embeddingsCost))
print("promptCost: " + str(promptCost))
print("completionTokenCost: " + str(completionTokenCost))
print("total cost: " + str(total_cost))
```
With Surrounding Code Using Llama Index
When queries go through LlamaIndex, a `TokenCountingHandler` can collect the token counts for you:

```python
import tiktoken
from llama_index.callbacks import CallbackManager, TokenCountingHandler
from llama_index import VectorStoreIndex, SimpleDirectoryReader, ServiceContext
from lll_pricing_calculation import calculate_openai_pricing

sampleQuery = "Sample Query"

token_counter = TokenCountingHandler(
    tokenizer=tiktoken.encoding_for_model("text-davinci-003").encode,
    verbose=False,  # set to True to see usage printed to the console
)
callback_manager = CallbackManager([token_counter])
service_context = ServiceContext.from_defaults(callback_manager=callback_manager)

def askQuestion(quest, storage, service_context, token_counter):
    token_counter.reset_counts()
    # `index`, `query`, and `dataFolder` are defined outside this snippet
    specificindex = index.get_index(dataFolder, "./storage" + storage, service_context)
    print(quest)
    result = query.query_index(specificindex, quest, "./storage" + storage)
    print(result)

    # You can also access the counts directly:
    print('Embedding Tokens: ', token_counter.total_embedding_token_count, '\n',
          'LLM Prompt Tokens: ', token_counter.prompt_llm_token_count, '\n',
          'LLM Completion Tokens: ', token_counter.completion_llm_token_count, '\n',
          'Total LLM Token Count: ', token_counter.total_llm_token_count)

    # The pricing calculation takes place here
    (costForThousandCurrency, embeddingsCost, promptCost,
     completionTokenCost, total_cost) = calculate_openai_pricing(
        "GPT-3.5 Turbo",
        "4K context",
        token_counter.total_embedding_token_count,
        token_counter.prompt_llm_token_count,
        token_counter.completion_llm_token_count,
    )

    print("currency: " + costForThousandCurrency)
    print("embeddingsCost: " + str(embeddingsCost))
    print("promptCost: " + str(promptCost))
    print("completionTokenCost: " + str(completionTokenCost))
    print("total cost: " + str(total_cost))

askQuestion(sampleQuery, "4", service_context, token_counter)
```
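When running many queries in one session, the per-query breakdowns can be accumulated into a running total. A minimal stand-alone sketch (the `SessionCost` helper is hypothetical, not part of the package):

```python
class SessionCost:
    """Accumulate per-query cost breakdowns into a session total.

    Hypothetical helper: feed it the cost values returned by
    calculate_openai_pricing after each query.
    """

    def __init__(self):
        self.embeddings = 0.0
        self.prompt = 0.0
        self.completion = 0.0

    def add(self, embeddings_cost, prompt_cost, completion_cost):
        self.embeddings += embeddings_cost
        self.prompt += prompt_cost
        self.completion += completion_cost

    @property
    def total(self):
        return self.embeddings + self.prompt + self.completion

session = SessionCost()
session.add(0.0001, 0.003, 0.001)  # breakdown from query 1
session.add(0.0002, 0.004, 0.002)  # breakdown from query 2
print(round(session.total, 6))
```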
Source Distribution
Hashes for openai_pricing_calc_draft-0.5.0.tar.gz

| Algorithm | Hash digest |
|---|---|
| SHA256 | ab46ec2deda2320d1653b8391225ff48e0e77b5fba9a9904769b43e08660ef38 |
| MD5 | 0bc3db341823081fd61d84cf29dbd958 |
| BLAKE2b-256 | 57f0dfa2d1c14c1ce10e306b80a2685f28ed4d90aee0b2c606ec98a033545d5f |

Built Distribution
Hashes for openai_pricing_calc_draft-0.5.0-py3-none-any.whl

| Algorithm | Hash digest |
|---|---|
| SHA256 | 1c3ac117be69e81555248f54a1698d52e8c3d2fcdb45280b5e83acff7e7100b7 |
| MD5 | 6329fca69cd63105cad16ede4fb907ac |
| BLAKE2b-256 | 6ba46ac62f92c54ca2f7479547bd50cfd39f216a43f7355bb9c19e8c4a1e4d37 |