GPU Bench
According to LeChat:
| Graphics card | TOPS (INT8) | TOPS (FP16) | Architecture |
|---|---|---|---|
| RTX 3060 (12 GB) | ~120 TOPS | ~60 TOPS | Ampere |
| RTX 5060 Ti (16 GB) | ~759 TOPS | ~380 TOPS | Blackwell |
llama.cpp benchmarks:
- Text generation: tg128, tg256, tg512: `-p 0 -n 128,256,512`
- Prompt processing: b128, b256, b512: `-p 1024 -n 0 -b 128,256,512`
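For reference, the two invocations behind those test names look like this (`model.gguf` is a placeholder for any of the models below):

```
# Text generation only (tg128/tg256/tg512)
./build/bin/llama-bench -m model.gguf -p 0 -n 128,256,512

# Prompt processing at several batch sizes (b128/b256/b512)
./build/bin/llama-bench -m model.gguf -p 1024 -n 0 -b 128,256,512
```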
Results in tokens/second:

| model | test | i7-1360P | RTX 3060 | RTX 5060 Ti |
|---|---|---|---|---|
| Qwen2.5-coder-7b-instruct-q5_k_m (5.07 GiB) | tg128 | 5.47 | 57.65 | 73.54 |
| | tg256 | … | 57.61 | 73.32 |
| | tg512 | … | 56.24 | 71.80 |
| | b128 | … | 2840.57 | |
| | b256 | … | 3209.52 | |
| | b512 | … | 3271.22 | |
| Qwen2.5-coder-7b-instruct-q8_0 (7.54 GiB) | tg128 | … | 41.42 | 50.33 |
| | tg256 | … | 41.38 | 50.33 |
| | tg512 | … | 40.70 | 49.62 |
| | b128 | 13.98 | 2972.52 | |
| | b256 | … | 3460.41 | |
| | b512 | … | 3511.29 | |
| EuroLLM-9B-Instruct-Q4_0 (4.94 GiB) | tg128 | … | 56.06 | 71.41 |
| | tg256 | … | 55.96 | 71.15 |
| | tg512 | … | 53.87 | 69.45 |
| | b128 | … | CUDA error | |
| | b256 | … | … | |
| | b512 | … | … | |
| Qwen3-14B-UD-Q5_K_XL (9.82 GiB) | tg128 | … | 37.66 | |
| | tg256 | … | 38.17 | |
| | tg512 | … | 37.30 | |
| | b128 | … | | |
| | b256 | … | | |
| | b512 | … | | |
Intel® Core™ i7-1360P 13th Gen
For comparison …
Qwen2.5-coder-7b-instruct-q5_k_m:
```
./llama-bench -m ~/Data/AI_Models/Qwen2.5-coder-7b-instruct-q5_k_m.gguf -p 0 -n 128
load_backend: loaded RPC backend from /home/.../llama-b7109/libggml-rpc.so
load_backend: loaded CPU backend from /home/.../llama-b7109/libggml-cpu-alderlake.so
| model                          |       size |     params | backend    | threads |            test |                  t/s |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | --------------: | -------------------: |
| qwen2 7B Q5_K - Medium         |   5.07 GiB |     7.62 B | CPU        |       4 |           tg128 |          5.47 ± 0.72 |
```
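llama-bench picked 4 threads by default here; on a hybrid CPU like the i7-1360P (4 P-cores + 8 E-cores) it can be worth sweeping the thread count explicitly. A minimal sketch, assuming the same model file (`-t` accepts a comma-separated list):

```
./llama-bench -m ~/Data/AI_Models/Qwen2.5-coder-7b-instruct-q5_k_m.gguf -p 0 -n 128 -t 4,8,12
```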
Gigabyte Windforce OC 12 GB GeForce RTX 3060
llama.cpp
build: 3f3a4fb9c (7130) (master, 2025-11-21), with CUDA
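A CUDA build of that revision follows the standard cmake flow (a sketch; the exact options used for this build aren't recorded here):

```
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release -j
```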
Qwen2.5-coder-7b-instruct-q5_k_m
```
./build/bin/llama-bench -m ~/Data/AI_Models/Qwen2.5-coder-7b-instruct-q5_k_m.gguf -p 0 -n 128,256,512
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3060, compute capability 8.6, VMM: yes
| model                          |       size |     params | backend    | ngl |            test |                  t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | --------------: | -------------------: |
| qwen2 7B Q5_K - Medium         |   5.07 GiB |     7.62 B | CUDA       |  99 |           tg128 |         57.65 ± 0.03 |
| qwen2 7B Q5_K - Medium         |   5.07 GiB |     7.62 B | CUDA       |  99 |           tg256 |         57.61 ± 0.03 |
| qwen2 7B Q5_K - Medium         |   5.07 GiB |     7.62 B | CUDA       |  99 |           tg512 |         56.24 ± 0.05 |
```
Qwen2.5-coder-7b-instruct-q8_0
```
./build/bin/llama-bench -m ~/Data/AI_Models/Qwen2.5-coder-7b-instruct-q8_0.gguf -p 0 -n 128,256,512
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3060, compute capability 8.6, VMM: yes
| model                          |       size |     params | backend    | ngl |            test |                  t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | --------------: | -------------------: |
| qwen2 7B Q8_0                  |   7.54 GiB |     7.62 B | CUDA       |  99 |           tg128 |         41.42 ± 0.00 |
| qwen2 7B Q8_0                  |   7.54 GiB |     7.62 B | CUDA       |  99 |           tg256 |         41.38 ± 0.05 |
| qwen2 7B Q8_0                  |   7.54 GiB |     7.62 B | CUDA       |  99 |           tg512 |         40.70 ± 0.01 |
```
EuroLLM-9B-Instruct-Q4_0
```
./build/bin/llama-bench -m ~/Data/AI_Models/EuroLLM-9B-Instruct-Q4_0.gguf -p 0 -n 128,256,512
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 3060, compute capability 8.6, VMM: yes
| model                          |       size |     params | backend    | ngl |            test |                  t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | --------------: | -------------------: |
| llama ?B Q4_0                  |   4.94 GiB |     9.15 B | CUDA       |  99 |           tg128 |         56.06 ± 0.01 |
| llama ?B Q4_0                  |   4.94 GiB |     9.15 B | CUDA       |  99 |           tg256 |         55.96 ± 0.02 |
| llama ?B Q4_0                  |   4.94 GiB |     9.15 B | CUDA       |  99 |           tg512 |         53.87 ± 0.03 |
```
PNY OC 16 GB GeForce RTX 5060 Ti
Qwen2.5-coder-7b-instruct-q5_k_m
```
$ ./llama.cpp/build/bin/llama-bench -m ~/Data/AI_Models/Qwen2.5-coder-7b-instruct-q5_k_m.gguf -p 0 -n 128,256,512
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 5060 Ti, compute capability 12.0, VMM: yes
| model                          |       size |     params | backend    | ngl |            test |                  t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | --------------: | -------------------: |
| qwen2 7B Q5_K - Medium         |   5.07 GiB |     7.62 B | CUDA       |  99 |           tg128 |         73.54 ± 0.01 |
| qwen2 7B Q5_K - Medium         |   5.07 GiB |     7.62 B | CUDA       |  99 |           tg256 |         73.32 ± 0.40 |
| qwen2 7B Q5_K - Medium         |   5.07 GiB |     7.62 B | CUDA       |  99 |           tg512 |         71.80 ± 0.61 |

build: 3f3a4fb9c (7130)
```
Qwen2.5-coder-7b-instruct-q8_0
```
$ ~/Code/bronx/AI_Coding/llama.cpp/build/bin/llama-bench -m ~/Data/AI_Models/Qwen2.5-coder-7b-instruct-q8_0.gguf -p 0 -n 128,256,512
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 5060 Ti, compute capability 12.0, VMM: yes
| model                          |       size |     params | backend    | ngl |            test |                  t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | --------------: | -------------------: |
| qwen2 7B Q8_0                  |   7.54 GiB |     7.62 B | CUDA       |  99 |           tg128 |         50.33 ± 0.01 |
| qwen2 7B Q8_0                  |   7.54 GiB |     7.62 B | CUDA       |  99 |           tg256 |         50.33 ± 0.01 |
| qwen2 7B Q8_0                  |   7.54 GiB |     7.62 B | CUDA       |  99 |           tg512 |         49.62 ± 0.02 |

build: 3f3a4fb9c (7130)
```
EuroLLM-9B-Instruct-Q4_0
```
$ ./llama.cpp/build/bin/llama-bench -m ~/Data/AI_Models/EuroLLM-9B-Instruct-Q4_0.gguf -p 0 -n 128,256,512
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 5060 Ti, compute capability 12.0, VMM: yes
| model                          |       size |     params | backend    | ngl |            test |                  t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | --------------: | -------------------: |
| llama ?B Q4_0                  |   4.94 GiB |     9.15 B | CUDA       |  99 |           tg128 |         71.41 ± 0.05 |
| llama ?B Q4_0                  |   4.94 GiB |     9.15 B | CUDA       |  99 |           tg256 |         71.15 ± 0.60 |
| llama ?B Q4_0                  |   4.94 GiB |     9.15 B | CUDA       |  99 |           tg512 |         69.45 ± 0.08 |

build: 3f3a4fb9c (7130)
```
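Since llama-bench accepts a comma-separated list for `-m` as well, the three models above can be benchmarked in a single run, e.g.:

```
./llama.cpp/build/bin/llama-bench -p 0 -n 128,256,512 \
  -m ~/Data/AI_Models/Qwen2.5-coder-7b-instruct-q5_k_m.gguf,~/Data/AI_Models/Qwen2.5-coder-7b-instruct-q8_0.gguf,~/Data/AI_Models/EuroLLM-9B-Instruct-Q4_0.gguf
```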
Translation
One sentence
Very fast with the RTX 3060.
```
./build/bin/llama-cli -m ~/Data/AI_Models/EuroLLM-9B-Instruct-Q4_0.gguf -i
```
Prompt:
french to english “C'est une occasion rare pour les élèves de pouvoir échanger sur les parcours de nos anciens élèves et ainsi, de répondre à de nombreuses questions voire à lever des freins.”
Replies:
EuroLLM-9B-Instruct:
“This is a rare opportunity for students to be able to discuss the career paths of our former students and, in doing so, to answer many questions and even remove some of the barriers they may be facing.”
EuroLLM-9B-q8_0:
The provided text is in French and needs to be translated into English.
I'll translate it step-by-step:
1. Identify the source language: The text is in French.
2. Analyze the text: The text appears to be a sentence describing an event or opportunity. It mentions “élèves” (students), “anciens élèves” (former students), and “lever des freins” (lifting barriers or obstacles).
3. Translate each part:
* “C'est une occasion rare pour les élèves de pouvoir échanger sur les parcours de nos anciens élèves” translates to “This is a rare opportunity for students to be able to discuss the journeys of our former students.”
* “et ainsi, de répondre à de nombreuses questions voire à lever des freins” translates to “and thus, to answer many questions and even to lift barriers.”
4. Combine the translated parts: The complete translation is: “This is a rare opportunity for students to be able to discuss the journeys of our former students and thus, to answer many questions and even to lift barriers.”
Qwen2.5-coder-7b-instruct-q5_k_m :
Here's the English translation:
“It's a rare opportunity for students to exchange on the paths of our former students, thus being able to answer many questions or even overcome obstacles.”
This translation conveys the meaning that the situation being described is unusual and provides students with a valuable learning experience by allowing them to share and discuss the career paths of alumni, which can help them answer questions and potentially resolve challenges they may face in their own professional development.
Qwen2.5-coder-7b-instruct-q8_0:
Here's the English translation of the French sentence:
“This is a rare opportunity for students to exchange about the paths of our former students, and thus, to answer many questions or even to overcome obstacles.”
This translation preserves the meaning and tone of the original French statement, highlighting the value of this unique educational experience.
Qwen3-14B-UD-Q5_K_XL:
<think>
Okay, let's tackle this French to English translation. The user provided the sentence: “C'est une occasion rare pour les élèves de pouvoir échanger sur les parcours de nos anciens élèves et ainsi, de répondre à de nombreuses questions voire à lever des freins.”
First, I need to understand the structure and meaning. The sentence starts with “C'est une occasion rare” which translates to “It's a rare opportunity.” The subject is “les élèves” (the students), and the verb is “pouvoir échanger” (to be able to exchange). The object of the exchange is “les parcours de nos anciens élèves” (the paths/careers of our alumni). Then, the purpose is “de répondre à de nombreuses questions voire à lever des freins” (to answer many questions or even to remove barriers).
Now, I should check for any nuances. “Parcours” here likely refers to career paths or journeys, not just physical paths. “Anciens élèves” is alumni, so “our alumni's paths” makes sense. “Répondre à de nombreuses questions” is straightforward: answer many questions. “Voire à lever des freins” – “voire” is like “or even,” so “or even to remove barriers.”
I need to make sure the flow is natural in English. Maybe “a rare opportunity for students to exchange about the career paths of our alumni, and thus answer many questions or even overcome barriers.”
Wait, “échanger sur” could be “discuss” instead of “exchange about.” Also, “lever des freins” – “freins” can mean barriers or brakes. In this context, it's more about barriers. “Overcome barriers” sounds better than “remove barriers.”
Putting it all together: “It's a rare opportunity for students to discuss the career paths of our alumni and thus answer many questions or even overcome barriers.”
I should check if “thus” is the best conjunction. “And thus” or “and thereby” might work. Alternatively, “thereby” could be more formal. Let me see. “And thus” is okay, but maybe “and thereby” sounds more academic.
Another point: “pouvoir échanger” is “to be able to exchange,” but in English, “to exchange” might not be the best term here. “Discuss” or “talk about” is more natural.
So revised: “It's a rare opportunity for students to discuss the career paths of our alumni and thereby answer many questions or even overcome barriers.”
That seems accurate. Let me make sure there are no other nuances. The original sentence emphasizes that this opportunity is rare, which is captured. The purpose is to answer questions and overcome barriers, which is clear. I think that's a solid translation.
</think>
It's a rare opportunity for students to discuss the career paths of our alumni and thereby answer many questions or even overcome barriers.
Long text
2.7 KB
To exit after the translation is done, use the --single-turn option; not to be confused with -no-cnv, which sends some models into a loop until "context full and context shift is disabled ⇒ stopping".
Some models, such as EuroLLM-9B-q8_0, don't do the job correctly unless --ctx-size 0 is passed (0 = loaded from the model).
All four models ran to completion. I haven't analyzed the output …
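Putting those options together, a one-shot translation run might look like this (a sketch; `prompt.txt` is a hypothetical file holding the "french to english …" prompt):

```
./build/bin/llama-cli -m ~/Data/AI_Models/EuroLLM-9B-Instruct-Q4_0.gguf \
    --single-turn --ctx-size 0 -f prompt.txt
```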
8.8 KB
Same as for the previous text.
Incidentally, with the Intel® Core™ 13th Gen i7-1360P and EuroLLM-9B-Instruct-Q4_0: 3.96 tokens/s, 1330 tokens, 335.56 s, i.e. almost 6 minutes.
19 KB
- EuroLLM-9B-Instruct-Q4_0:
  - context full and context shift is disabled ⇒ stopping
  - --ctx-size 20000: OK, but produces a summary in French rather than a translation
- Qwen3-14B-UD-Q5_K_XL:
  - prompt is too long (4267 tokens, max 4092)
  - --ctx-size 20000: unable to load model
- Qwen2.5-coder-7b-instruct-q8_0:
  - prompt is too long (4267 tokens, max 4092)
  - --ctx-size 20000: OK, but not sure it translated "everything" (not fully proofread)
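To check ahead of time whether a text will hit the "prompt is too long" limit, the llama-tokenize helper built alongside llama-bench can count tokens. A sketch, where `texte.txt` is a hypothetical file with the source text and the `--show-count` flag is assumed to be available in this build:

```
./build/bin/llama-tokenize -m ~/Data/AI_Models/Qwen2.5-coder-7b-instruct-q8_0.gguf -f texte.txt --show-count
```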
Context size
With llama.cpp CUDA, the RTX 3060 12 GB, and opencode using the Qwen2.5-coder-7b-instruct-q8_0.gguf model (8.1 GB on disk).
```
./build/bin/llama-server --port 8012 --jinja -m ~/Data/AI_Models/Qwen2.5-coder-7b-instruct-q8_0.gguf --ctx-size <CONTEXT SIZE IN TOKENS>
# then
time opencode run -m llamacpp/Qwen2.5-coder-7b-instruct-q8_0.gguf --attach=http://127.0.0.1:30781 --agent=plan "Explain async/await in JavaScript"
```
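Once the server is up, a quick sanity check before launching opencode (llama-server exposes a /health endpoint):

```
curl http://127.0.0.1:8012/health
```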
--ctx-size 4096
Error: the request exceeds the available context size, try increasing it
--ctx-size 12288
Async/await in JavaScript is used to simplify asynchronous code by making it look more synchronous. It allows you to write asynchronous code in a cleaner, more readable way.
real	0m6,294s
user	0m1,067s
sys	0m0,286s
--ctx-size 40000
Async/await in JavaScript is a feature that simplifies the syntax for handling asynchronous operations, making the code more readable and manageable. It allows you to write asynchronous code in a synchronous manner, using the `async` and `await` keywords.
Here's a brief explanation:
- **Async Function**: A function declared with the `async` keyword is an async function. Inside an async function, you can use the `await` keyword to pause the function's execution until a Promise is resolved or rejected.
- **Await Keyword**: The `await` keyword is used to pause the execution of the async function until the Promise it's waiting on is settled (either resolved or rejected). This makes it easier to read and understand asynchronous code, as it behaves more like synchronous code.
- **Promise**: A Promise is an object that represents the eventual completion (or failure) of an asynchronous operation and its resulting value.
Here's an example to illustrate:
```javascript
async function fetchData() {
try {
let response = await fetch('https://api.example.com/data');
let data = await response.json();
console.log(data);
} catch (error) {
console.error('Error fetching data:', error);
}
}
```
In this example:
- The `fetchData` function is declared as `async`, allowing the use of `await`.
- `await fetch('https://api.example.com/data')` waits for the fetch request to complete.
- `await response.json()` waits for the JSON parsing to complete.
- `try...catch` is used to handle any errors that occur during the fetch or parsing operations.
Async/await makes it easier to handle multiple asynchronous operations in sequence, reducing the need for nested callbacks and improving the overall code structure.
real 0m16,037s
user 0m2,594s
sys 0m0,980s
but it can also reply:
Async/await in JavaScript is a syntax that simplifies the behavior of using promises synchronously. It makes the code cleaner and easier to read by allowing you to write asynchronous code in a more linear, synchronous style.
real	0m2,224s
user	0m1,139s
sys	0m0,329s
--ctx-size 60000
Async/await is a syntax introduced in JavaScript to simplify working with promises, making asynchronous code more readable and easier to write. It allows you to write asynchronous code in a sequential, synchronous style, which can improve the overall flow and readability of your code. Here's a basic explanation:
- **Async Function**: A function declared with the `async` keyword returns a promise. This means that even if the function does not explicitly return a promise, it will implicitly return one.
- **Await Keyword**: The `await` keyword is used inside an async function to pause the execution of the function until a promise is resolved or rejected. This allows you to handle the resolved value directly, making the code look synchronous.
Here's a simple example to illustrate:
```javascript
// Define an async function
async function fetchData() {
// Await a promise
let response = await fetch('https://api.example.com/data');
let data = await response.json();
return data;
}
// Call the async function
fetchData().then(data => {
console.log(data);
}).catch(error => {
console.error('Error fetching data:', error);
});
```
In this example:
- `fetchData` is an async function that fetches data from an API.
- The `await` keyword is used to wait for the `fetch` and `response.json()` operations to complete.
- The resolved data is then logged to the console.
Using async/await, you can avoid deeply nested callbacks and make your asynchronous code more linear and easier to manage.
real 0m14,785s
user 0m2,381s
sys 0m0,984s
or:
Async/await is a feature in JavaScript that makes it easier to write and read asynchronous code, which is code that performs operations over time, like fetching data from a server or reading a file. It allows you to write asynchronous code in a more sequential and readable way, similar to synchronous code.
Here's a simple example to illustrate:
```javascript
// Without async/await
function fetchData(callback) {
setTimeout(() => {
callback('Data fetched');
}, 1000);
}
fetchData((data) => {
console.log(data);
});
```
With async/await:
```javascript
// With async/await
async function fetchData() {
return new Promise((resolve) => {
setTimeout(() => {
resolve('Data fetched');
}, 1000);
});
}
async function getData() {
try {
const data = await fetchData();
console.log(data);
} catch (error) {
console.error(error);
}
}
getData();
```
In the async/await example:
- The `fetchData` function returns a promise that resolves after 1 second.
- The `getData` function is marked with `async`, allowing the use of `await` inside it.
- `await fetchData()` pauses the execution of `getData` until the promise resolves, and then the resolved value is logged to the console.
This makes the asynchronous code easier to understand and maintain, as it looks more like synchronous code.
real 0m15,296s
user 0m2,434s
sys 0m0,918s
or again:
`async/await` in JavaScript is a syntactic sugar built on top of Promises to simplify asynchronous code. It allows you to write asynchronous code in a more synchronous and readable manner.
Here's a brief explanation:
1. **Async Function**: You declare a function as `async` using the `async` keyword. This makes the function return a Promise.
```javascript
async function fetchData() {
// function body
}
```
2. **Await Keyword**: You use the `await` keyword inside an `async` function to pause the execution of the function until the Promise is resolved or rejected.
```javascript
async function fetchData() {
const response = await fetch('https://api.example.com/data');
const data = await response.json();
return data;
}
```
3. **Error Handling**: You can handle errors using `try...catch` blocks.
```javascript
async function fetchData() {
try {
const response = await fetch('https://api.example.com/data');
const data = await response.json();
return data;
} catch (error) {
console.error('Error fetching data:', error);
}
}
```
In summary, `async/await` makes asynchronous code easier to read and write by allowing you to write it in a more linear and synchronous style.
real 0m9,332s
user 0m2,170s
sys 0m0,836s
--ctx-size 74000
Works fine with this prompt; might be borderline with something more complex …
--ctx-size 76000
llama-server starts fine, but crashes during computation:
```
slot update_slots: id 3 | task 0 | prompt processing progress, n_tokens = 2048, batch.n_tokens = 2048, progress = 0.220286
/home/.../llama.cpp/ggml/src/ggml-cuda/ggml-cuda.cu:89: CUDA error
CUDA error: out of memory
  current device: 0, in function alloc at /home/.../llama.cpp/ggml/src/ggml-cuda/ggml-cuda.cu:466
  cuMemCreate(&handle, reserve_size, &prop, 0)
...
```
--ctx-size 80000
llama-server crashes right at startup:
```
ggml_backend_cuda_buffer_type_alloc_buffer: allocating 422.00 MiB on device 0: cudaMalloc failed: out of memory
ggml_gallocr_reserve_n: failed to allocate CUDA0 buffer of size 442499072
graph_reserve: failed to allocate compute buffers
llama_init_from_model: failed to initialize the context: failed to allocate compute pp buffers
```
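A rough estimate of why the limit sits around there, assuming Qwen2.5-7B's published geometry (28 layers, 4 KV heads, head dimension 128) and an fp16 KV cache; these are back-of-envelope figures, not measurements:

```
KV cache per token ≈ 2 (K and V) × 28 layers × 4 KV heads × 128 dims × 2 bytes ≈ 56 KiB
At --ctx-size 76000: 76000 × 56 KiB ≈ 4.1 GiB of KV cache,
plus the 7.54 GiB of weights and the compute buffers → beyond the 12 GiB of the RTX 3060.
```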
With the Q5 medium version, Qwen2.5-coder-7b-instruct-q5_k_m.gguf (5.4 GB file):
- --ctx-size 100000 works
- but --ctx-size 120000 crashes during computation (out of memory)