Change Gemini's safety filter settings in a Node.js environment.

This article takes about 6 minutes to read.

Things I want to do

Gemini has safety filters in place to block responses that are violent or sexual.

In this article, we will change those filter settings to make Gemini more tolerant of violent and sexual content.

Instructions on how to use Gemini are summarized below.


Preamble

Gemini is generally well-behaved, so it rarely returns anything that would trigger a filter.

The tame answer above is not tame because of the filter; you get essentially the same response regardless of the filter settings. (Of course, it is not the exact same answer word for word.)

Asking the model to play a specific role seems to be an effective way to elicit answers that get filtered.

When the filter actually triggers, no response is returned, as shown above.

Looking at the console, you can see that it was blocked.
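To check this from code rather than the console, the SDK's response object carries the block reason in promptFeedback when a prompt is refused. Here is a small sketch with a hypothetical helper (describeBlock is my own name, not part of the SDK; the mock objects stand in for real API responses and assume the response shape documented for @google/generative-ai):

```javascript
// Hypothetical helper: summarizes why a Gemini response came back empty.
// Assumes a blocked prompt carries promptFeedback.blockReason (e.g. "SAFETY").
function describeBlock(response) {
  const reason = response?.promptFeedback?.blockReason;
  return reason ? `Blocked by filter: ${reason}` : "Not blocked";
}

// Mock objects standing in for real API responses:
console.log(describeBlock({ promptFeedback: { blockReason: "SAFETY" } }));
// → Blocked by filter: SAFETY
console.log(describeBlock({ promptFeedback: {} }));
// → Not blocked
```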


Implementation

Import HarmCategory and HarmBlockThreshold.

import { GoogleGenerativeAI, HarmCategory, HarmBlockThreshold } from "@google/generative-ai";

Next, we will modify the following call.

    const result = await model.generateContent(prompt);

After the correction, it will look like this:

    const result = await model.generateContent({
        contents: [
            {
                role: "user",
                parts: [{ text: prompt }],
            },
        ],
        safetySettings: [
            {
                category: HarmCategory.HARM_CATEGORY_HARASSMENT,
                threshold: HarmBlockThreshold.BLOCK_NONE,
            },
            {
                category: HarmCategory.HARM_CATEGORY_HATE_SPEECH,
                threshold: HarmBlockThreshold.BLOCK_NONE,
            },
            {
                category: HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT,
                threshold: HarmBlockThreshold.BLOCK_NONE,
            },
            {
                category: HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
                threshold: HarmBlockThreshold.BLOCK_NONE,
            },
        ],
    });

The categories HARM_CATEGORY_HARASSMENT, HARM_CATEGORY_HATE_SPEECH, HARM_CATEGORY_SEXUALLY_EXPLICIT, and HARM_CATEGORY_DANGEROUS_CONTENT are all set to BLOCK_NONE (no filter).
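If you would rather not repeat these settings on every call, the SDK also accepts safetySettings when the model is created. A minimal sketch, assuming a GEMINI_API_KEY environment variable and the gemini-pro model name (adjust both for your setup):

```javascript
import { GoogleGenerativeAI, HarmCategory, HarmBlockThreshold } from "@google/generative-ai";

// Assumption: the API key is supplied via the GEMINI_API_KEY environment variable.
const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY);

// Every generateContent call on this model instance now uses the relaxed settings.
const model = genAI.getGenerativeModel({
  model: "gemini-pro",
  safetySettings: [
    { category: HarmCategory.HARM_CATEGORY_HARASSMENT, threshold: HarmBlockThreshold.BLOCK_NONE },
    { category: HarmCategory.HARM_CATEGORY_HATE_SPEECH, threshold: HarmBlockThreshold.BLOCK_NONE },
    { category: HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT, threshold: HarmBlockThreshold.BLOCK_NONE },
    { category: HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT, threshold: HarmBlockThreshold.BLOCK_NONE },
  ],
});

const result = await model.generateContent(prompt);
```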

These categories are members of the HarmCategory enum. Gemini models support only HARM_CATEGORY_HARASSMENT, HARM_CATEGORY_HATE_SPEECH, HARM_CATEGORY_SEXUALLY_EXPLICIT, and HARM_CATEGORY_DANGEROUS_CONTENT; the other categories are available only in the older PaLM 2 models.

The website below contains a statement that can be misread as saying only HARM_CATEGORY_HARASSMENT is supported, but that is a mistranslation. The correct statement is: 'Only HARM_CATEGORY_HARASSMENT, HARM_CATEGORY_HATE_SPEECH, HARM_CATEGORY_SEXUALLY_EXPLICIT, and HARM_CATEGORY_DANGEROUS_CONTENT are supported.'
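As an aside, the HarmCategory and HarmBlockThreshold enum members resolve to the same string names used by the REST API, so the settings array can also be built from plain strings. A hedged sketch under that assumption (buildSafetySettings is my own hypothetical helper, not part of the SDK):

```javascript
// The four harm categories supported by Gemini models, as plain strings
// (assumption: the SDK's HarmCategory enum members resolve to these values).
const GEMINI_HARM_CATEGORIES = [
  "HARM_CATEGORY_HARASSMENT",
  "HARM_CATEGORY_HATE_SPEECH",
  "HARM_CATEGORY_SEXUALLY_EXPLICIT",
  "HARM_CATEGORY_DANGEROUS_CONT ENT".replace(" ", ""),
];

// Hypothetical helper: applies one threshold to every supported category.
function buildSafetySettings(threshold = "BLOCK_NONE") {
  return GEMINI_HARM_CATEGORIES.map((category) => ({ category, threshold }));
}

console.log(buildSafetySettings().length); // → 4
```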

Text generation | Gemini API | Google AI for Developers

For a simple implementation method, please refer to the following page.


Result

The model has started arguing with me.


Websites I used as references

Playing with the Gemini API, free for a limited time (it can do image recognition, and you can even make it say mean things!) | ぶるぺん/blue.pen5805
Text generation | Gemini API | Google AI for Developers
Safety settings | Gemini API | Google AI for Developers
