Gemini 2.0 Flash
About
Intelligent and fast AI with next-generation features.
Perks
High Accuracy
Very Intelligent
High Speed
Multilingual
Settings
Temperature- Controls the randomness of sampling. Higher values make the model more creative and lower values make it more focused.
Top P- Tokens are considered from most to least probable until their cumulative probability reaches this value; only those tokens remain candidates. Use a lower value for less random responses and a higher value for more random responses.
Top K- At each token selection step, only the top_k highest-probability tokens are kept as candidates. Those candidates are then filtered by Top P, and the final token is chosen with temperature sampling (see the sketch after this list). Use a lower number for less random responses and a higher number for more random responses.
Context length- The maximum number of tokens to use as input for the model.
Response length- The maximum number of tokens to generate in the output.
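The three sampling settings work together in the order described under Top K. The following is a toy Python sketch of that pipeline, not Gemini's actual decoder; the function name, default values, and probability table are invented purely for illustration.

import random

def sample_next_token(probs, temperature=1.0, top_p=0.95, top_k=40):
    # Step 1 (Top K): keep only the top_k most probable candidate tokens.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    # Step 2 (Top P): keep the smallest prefix whose cumulative probability
    # reaches top_p.
    nucleus, cumulative = [], 0.0
    for token, prob in ranked:
        nucleus.append((token, prob))
        cumulative += prob
        if cumulative >= top_p:
            break
    # Step 3 (Temperature): re-weight the survivors. Low temperature sharpens
    # the distribution (more focused); high temperature flattens it (more creative).
    weights = [prob ** (1.0 / temperature) for _, prob in nucleus]
    tokens = [token for token, _ in nucleus]
    return random.choices(tokens, weights=weights, k=1)[0]

# Hypothetical next-token distribution, for illustration only.
probs = {"the": 0.40, "a": 0.25, "my": 0.15, "an": 0.12, "this": 0.08}
print(sample_next_token(probs, temperature=0.7, top_p=0.9, top_k=4))

With a low temperature and low Top P, only the most likely tokens survive and the output is nearly deterministic; raising either value widens the pool of candidates and makes responses more varied.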