Instructions for using prithivMLmods/Eratosthenes-Polymath-14B-Instruct with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use prithivMLmods/Eratosthenes-Polymath-14B-Instruct with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="prithivMLmods/Eratosthenes-Polymath-14B-Instruct")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("prithivMLmods/Eratosthenes-Polymath-14B-Instruct")
model = AutoModelForCausalLM.from_pretrained("prithivMLmods/Eratosthenes-Polymath-14B-Instruct")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
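For a 14B checkpoint, the plain pipeline call above loads the weights in full precision on whatever device is default, which is slow or infeasible on CPU. A minimal variant that picks the checkpoint's dtype and places weights on available GPUs (assuming the `accelerate` package is installed; these are standard `transformers` options, not something specific to this model) might look like this:

```python
# Hedged sketch: same pipeline, with automatic dtype selection and GPU placement.
# Requires `pip install accelerate` for device_map="auto".
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="prithivMLmods/Eratosthenes-Polymath-14B-Instruct",
    torch_dtype="auto",   # use the dtype stored in the checkpoint (e.g. bfloat16)
    device_map="auto",    # spread the weights across available GPUs
)
print(pipe([{"role": "user", "content": "Who are you?"}], max_new_tokens=40))
```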
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use prithivMLmods/Eratosthenes-Polymath-14B-Instruct with vLLM:
Install from pip and serve the model
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "prithivMLmods/Eratosthenes-Polymath-14B-Instruct"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "prithivMLmods/Eratosthenes-Polymath-14B-Instruct",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

Use Docker
```bash
docker model run hf.co/prithivMLmods/Eratosthenes-Polymath-14B-Instruct
```
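Because the vLLM server exposes an OpenAI-compatible API, it can also be called from Python. A minimal sketch using the `openai` client package, assuming the server started above is listening on localhost:8000 (the API key is a placeholder; vLLM does not require one by default):

```python
# Hedged sketch: call the local vLLM OpenAI-compatible endpoint from Python.
# Assumes `pip install openai` and a server running on localhost:8000.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="prithivMLmods/Eratosthenes-Polymath-14B-Instruct",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```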
- SGLang
How to use prithivMLmods/Eratosthenes-Polymath-14B-Instruct with SGLang:
Install from pip and serve the model
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "prithivMLmods/Eratosthenes-Polymath-14B-Instruct" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "prithivMLmods/Eratosthenes-Polymath-14B-Instruct",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

Use Docker images
```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "prithivMLmods/Eratosthenes-Polymath-14B-Instruct" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "prithivMLmods/Eratosthenes-Polymath-14B-Instruct",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

- Docker Model Runner
How to use prithivMLmods/Eratosthenes-Polymath-14B-Instruct with Docker Model Runner:
```bash
docker model run hf.co/prithivMLmods/Eratosthenes-Polymath-14B-Instruct
```
Eratosthenes-Polymath-14B-Instruct
Eratosthenes-Polymath-14B-Instruct is built on the Qwen 2.5 14B model architecture and is engineered to excel at mathematical reasoning, distributed reinforcement learning (RL), and general-purpose problem solving. The model is fine-tuned on chain-of-thought reasoning datasets, optimization-focused corpora, and structured reasoning datasets to strengthen its logical deduction, multi-step reasoning, and decision-making capabilities.
Key Improvements
- Advanced Mathematical Reasoning: Excels at solving complex equations, performing symbolic computation, proving theorems, and working through mathematical problems step by step.
- Distributed Reinforcement Learning Expertise: Fine-tuned for robust policy optimization using distributed RL techniques, providing resilience and optimality across dynamic problem spaces.
- General-Purpose Reasoning and Problem Solving: Strong across a broad range of domains, handling factual questions, logical analysis, and multi-step cognitive tasks.
- Long-Context Mastery: Supports a context of up to 128K tokens and can generate up to 8K tokens, enabling detailed, coherent long-form outputs and complex derivations.
- Superior Instruction Following: Follows complex and structured prompts precisely, maintaining focus and clarity over extended dialogues.
- Coding and Algorithmic Fluency: Highly effective at code generation, debugging, algorithm design, and optimization-problem modeling across various programming languages.
Quickstart with transformers
Use the model easily with the transformers library and apply_chat_template:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Eratosthenes-Polymath-14B-Instruct"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Explain the connection between distributed reinforcement learning and robust policy optimization."
messages = [
    {"role": "system", "content": "You are an expert assistant specializing in mathematics, optimization, and reinforcement learning."},
    {"role": "user", "content": prompt}
]

# Render the chat template into a single prompt string
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated text remains
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
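For long derivations it can help to see tokens as they are produced. A minimal streaming variant of the call above, reusing the `model`, `tokenizer`, and `model_inputs` already defined (TextStreamer is a standard transformers utility, not something specific to this model):

```python
# Hedged sketch: stream tokens to stdout while generating, reusing the objects above.
from transformers import TextStreamer

streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(
    **model_inputs,
    max_new_tokens=512,
    streamer=streamer,  # prints tokens as they are generated
)
```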
Intended Use
- Mathematical and Optimization Problem Solving: Designed for solving complex mathematical problems, optimization modeling, symbolic logic, and structured derivations.
- Distributed Reinforcement Learning Research: Supports designing, analyzing, and explaining distributed RL systems, robust policy optimization, and autonomous decision systems.
- General Knowledge and Reasoning: Effective at answering a wide range of questions and performing structured reasoning across scientific, technical, and educational domains.
- Educational and Research Support: Suited for students, researchers, and professionals seeking detailed explanations, derivations, and scientific insights.
- Code Writing and Algorithm Design: Excels at creating, optimizing, and explaining algorithms, particularly those relevant to mathematical computation and optimization.
- Intelligent Conversational Systems: Well suited for technical conversational agents and educational bots requiring deep understanding and detailed reasoning.
- Long-Form Technical Content Generation: Capable of producing structured, coherent articles, tutorials, and research papers, especially in technical and mathematical fields.
- Structured Data Generation: Supports outputting structured formats such as proofs, equations, tables, and JSON, useful for scientific and technical workflows (see the sketch after this list).
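As a concrete illustration of the structured-output use case, here is a hedged sketch that reuses the `model` and `tokenizer` from the Quickstart above and asks for a JSON answer; the prompt wording and schema are illustrative, and `json.loads` will fail if the model wraps the object in extra prose:

```python
# Hedged sketch: request JSON output and parse it, reusing `model` and `tokenizer` from the Quickstart.
import json

messages = [
    {"role": "system", "content": "Answer only with a JSON object, no prose."},
    {"role": "user", "content": "Give the roots of x^2 - 5x + 6 as {\"roots\": [...]}."},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=128)
raw = tokenizer.decode(output_ids[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True)

try:
    parsed = json.loads(raw)   # e.g. {"roots": [2, 3]}
except json.JSONDecodeError:
    parsed = None              # model added prose; fall back to the raw string
print(parsed or raw)
```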
Limitations
- Heavy Hardware Requirements: Due to its large parameter count and long-context handling, it requires powerful GPUs or TPUs with significant memory (a quantized-loading sketch follows this list).
- Potential for Training Biases: Outputs may still reflect biases from the mathematical, technical, or optimization-specific datasets used during training.
- Less Effective in Creative Tasks: Focused more on technical and logical reasoning than on freeform creative writing or storytelling.
- No Real-Time Event Awareness: Limited to knowledge prior to its training cutoff, without access to live or real-world updates.
- Prompt Sensitivity: Performance may vary with the clarity, structure, and specificity of the prompt, particularly for complex multi-step tasks.
- Error Propagation Risk: Small inaccuracies early in long-form outputs can propagate and affect the coherence of the overall answer.
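To soften the hardware requirement noted above, the checkpoint can typically be loaded in 4-bit using bitsandbytes quantization, at some cost in accuracy. A minimal sketch, assuming a CUDA GPU and the `bitsandbytes` and `accelerate` packages are installed:

```python
# Hedged sketch: 4-bit quantized loading to reduce GPU memory use.
# Requires `pip install bitsandbytes accelerate` and a CUDA GPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "prithivMLmods/Eratosthenes-Polymath-14B-Instruct"

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 while weights stay 4-bit
    bnb_4bit_quant_type="nf4",
)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=quant_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```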
