Constants§
- JSON_BEGIN_MARKER - JSON markers for LLM prompt
- JSON_END_MARKER - JSON markers for LLM prompt
- MAX_CONTEXT_UTILIZATION - Maximum percentage of the context window to use in a single request
- MAX_CONTEXT_WINDOW - Maximum context window size for the LLM, in tokens
- REQUEST_TEMPERATURE - Temperature for requests; kept low for deterministic results
- SYSTEM_PROMPT - System prompt for converting course material to markdown
- USER_PROMPT_START - User prompt for converting course material to markdown
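A minimal sketch of how these private constants might be declared; every literal value and prompt string below is an illustrative assumption, not the crate's actual settings:

```rust
/// JSON markers for LLM prompt (marker strings assumed for illustration).
const JSON_BEGIN_MARKER: &str = "BEGIN_JSON";
const JSON_END_MARKER: &str = "END_JSON";

/// Maximum percentage of the context window to use in a single request (value assumed).
const MAX_CONTEXT_UTILIZATION: f64 = 0.8;

/// Maximum context window size for the LLM, in tokens (value assumed).
const MAX_CONTEXT_WINDOW: usize = 128_000;

/// Temperature for requests; kept low for deterministic results (value assumed).
const REQUEST_TEMPERATURE: f32 = 0.1;

/// System prompt for converting course material to markdown (placeholder text).
const SYSTEM_PROMPT: &str =
    "Convert the provided course material into clean, well-structured markdown.";

/// User prompt for converting course material to markdown (placeholder text).
const USER_PROMPT_START: &str = "Convert the following material blocks to markdown:";
```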
Functions§
- append_markdown_with_separator - Appends markdown content to a result string with proper newline separators
- calculate_safe_token_limit - Calculates the safe token limit based on the context window and utilization
- convert_material_blocks_to_markdown_with_llm - Cleans content by converting the material blocks to clean markdown using an LLM
- prepare_llm_messages - Prepares messages for the LLM request
- process_block_chunk - Processes a subset of blocks in a single LLM request
- process_chunks - Processes all chunks and combines the results
- split_blocks_into_chunks - Splits blocks into chunks that fit within token limits
- split_oversized_block - Splits an oversized block into smaller string chunks
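A rough sketch of how the chunking and stitching helpers could fit together, assuming a simple `Block` type with a `content` field, a ~4-characters-per-token estimate, and a `call_llm` placeholder in place of the crate's real client; the actual signatures, error types, and the oversized-block handling done by `split_oversized_block` will differ:

```rust
/// A single piece of course material to be cleaned up (field name assumed).
struct Block {
    content: String,
}

/// Very rough token estimate, assuming roughly four characters per token.
fn estimate_tokens(text: &str) -> usize {
    text.len() / 4 + 1
}

/// Safe token budget: a fixed fraction of the model's context window.
fn calculate_safe_token_limit(context_window: usize, utilization: f64) -> usize {
    (context_window as f64 * utilization) as usize
}

/// Greedily pack blocks into chunks that stay under the token limit.
/// (Splitting a single oversized block, as split_oversized_block does, is omitted here.)
fn split_blocks_into_chunks(blocks: &[Block], token_limit: usize) -> Vec<Vec<&Block>> {
    let mut chunks: Vec<Vec<&Block>> = Vec::new();
    let mut current: Vec<&Block> = Vec::new();
    let mut current_tokens = 0usize;

    for block in blocks {
        let tokens = estimate_tokens(&block.content);
        if !current.is_empty() && current_tokens + tokens > token_limit {
            chunks.push(std::mem::take(&mut current));
            current_tokens = 0;
        }
        current.push(block);
        current_tokens += tokens;
    }
    if !current.is_empty() {
        chunks.push(current);
    }
    chunks
}

/// Append one chunk's markdown to the running result with a blank-line separator.
fn append_markdown_with_separator(result: &mut String, markdown: &str) {
    if !result.is_empty() {
        result.push_str("\n\n");
    }
    result.push_str(markdown.trim());
}

/// Process every chunk with one LLM request each and combine the answers.
/// `call_llm` stands in for whatever client and message building the crate really uses.
fn process_chunks(
    chunks: &[Vec<&Block>],
    call_llm: &dyn Fn(&str) -> Result<String, String>,
) -> Result<String, String> {
    let mut result = String::new();
    for chunk in chunks {
        let chunk_text = chunk
            .iter()
            .map(|b| b.content.as_str())
            .collect::<Vec<_>>()
            .join("\n\n");
        let markdown = call_llm(&chunk_text)?;
        append_markdown_with_separator(&mut result, &markdown);
    }
    Ok(result)
}
```

Under these assumptions, a caller would compute the budget once, e.g. `let limit = calculate_safe_token_limit(MAX_CONTEXT_WINDOW, MAX_CONTEXT_UTILIZATION);`, split the blocks against that limit, and then send one request per chunk while stitching the answers together, which is presumably how `convert_material_blocks_to_markdown_with_llm` ties these steps together.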