Code-Flow Training
Moving beyond static code representations, our models learn from repository evolution patterns, commit transitions, and dynamic code transformations to understand real-world software development processes.
State-of-the-art open-source code LLM for autonomous software engineering. Built with the innovative Code-Flow training paradigm to understand real-world code evolution.
Advancing autonomous software engineering with innovative training paradigms and efficient architectures.
Bifurcated post-training delivers two specialized variants: Thinking models with reasoning-driven RL for complex problem-solving, and Instruct models optimized for general coding assistance.
The Loop variant introduces a recurrent mechanism with shared parameters across iterations, optimizing the trade-off between model capacity and deployment footprint.
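Weight sharing across loop iterations is what keeps the Loop variant's parameter count flat while its effective depth grows. A minimal dependency-free sketch of the idea (all names here are illustrative, not IQuest's actual architecture or API):

```python
# Sketch of a recurrent block with shared parameters: the SAME weight matrix
# is applied on every iteration, so effective depth grows with the iteration
# count while the number of stored parameters stays constant.

def make_block(dim):
    # One shared "layer": a dim x dim weight matrix, deterministically filled.
    return [[0.01 * ((i + j) % 5) for j in range(dim)] for i in range(dim)]

def apply_block(w, x):
    # Matrix-vector product followed by a residual connection.
    dim = len(x)
    y = [sum(w[i][j] * x[j] for j in range(dim)) for i in range(dim)]
    return [xi + yi for xi, yi in zip(x, y)]

def loop_forward(x, shared_w, n_iters):
    # Reuse the same weights n_iters times (weight tying across depth).
    for _ in range(n_iters):
        x = apply_block(shared_w, x)
    return x

dim = 4
shared = make_block(dim)
hidden = loop_forward([1.0] * dim, shared, n_iters=3)

# Parameter count is independent of the number of iterations:
params_loop = dim * dim          # one shared block: 16 parameters
params_unrolled = 3 * dim * dim  # three distinct blocks would need 48
```

The trade-off the text describes falls out directly: `params_loop` stays fixed no matter how many iterations run at inference time, whereas an unrolled stack of distinct blocks scales linearly with depth.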
All models natively support context windows of up to 128K tokens without additional scaling techniques, enabling processing of entire codebases and multi-file contexts.
Choose from multiple model sizes with both Instruct and Thinking variants.
| Model | Parameters | Context | Variant | Link |
|---|---|---|---|---|
| IQuest-Coder-V1-7B-Instruct | 7B | 128K | Instruct | Hugging Face |
| IQuest-Coder-V1-14B-Instruct | 14B | 128K | Instruct | Hugging Face |
| IQuest-Coder-V1-40B-Instruct | 40B | 128K | Instruct | Hugging Face |
| IQuest-Coder-V1-40B-Thinking | 40B | 128K | Thinking | Hugging Face |
| IQuest-Coder-V1-40B-Loop-Instruct | 40B | 128K | Loop | Hugging Face |
Get started with IQuest Coder using Hugging Face Transformers.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "IQuestLab/IQuest-Coder-V1-40B-Instruct"

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
)

# Prepare the input
prompt = "Write a Python function to calculate the Fibonacci sequence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate a response
output = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
Peer-reviewed research underpinning IQuest Coder's innovations.