<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Cognitive Quest]]></title><description><![CDATA[Explore Cognitive Quest for insights on Responsible AI, Generative AI, Programming, Ethics, Leadership, and Career. Stay informed and inspired. Subscribe now!]]></description><link>https://www.cognitive-quest.com</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1719575258174/a657e3c5-07d9-45ea-a5c2-647d64b7cf7d.jpeg</url><title>Cognitive Quest</title><link>https://www.cognitive-quest.com</link></image><generator>RSS for Node</generator><lastBuildDate>Sun, 19 Apr 2026 11:07:59 GMT</lastBuildDate><atom:link href="https://www.cognitive-quest.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Red Teaming for GenAI Applications]]></title><description><![CDATA[What is Red Teaming?
In today’s rapidly evolving digital landscape, ensuring the safety and security of generative AI applications has become a paramount concern.
Traditionally, red teaming involves a group of security professionals, known as the red te...]]></description><link>https://www.cognitive-quest.com/red-teaming-for-genai-applications</link><guid isPermaLink="true">https://www.cognitive-quest.com/red-teaming-for-genai-applications</guid><category><![CDATA[redteaming]]></category><category><![CDATA[#responsibleai]]></category><category><![CDATA[red team]]></category><category><![CDATA[safety]]></category><category><![CDATA[AI Safety]]></category><category><![CDATA[evaluation metrics]]></category><category><![CDATA[Evaluation]]></category><category><![CDATA[genai]]></category><category><![CDATA[#chatbots]]></category><dc:creator><![CDATA[Amit Tyagi]]></dc:creator><pubDate>Sun, 21 Jul 2024 08:23:45 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1721550048536/387df06d-0549-4088-8820-4ca68807a35b.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>What is Red Teaming?</strong></p>
<p>In today’s rapidly evolving digital landscape, ensuring the safety and security of generative AI applications has become a paramount concern.</p>
<p>One effective strategy for identifying and mitigating risks in such systems is red teaming. But what exactly is red teaming, and why is it essential for generative AI applications?</p>
<p>Traditionally, red teaming involves a group of security professionals, known as the red team, who adopt the mindset of potential adversaries to test the defenses of an organization. This practice is rooted in military strategy and has been widely adopted in cybersecurity to uncover weaknesses in networks, systems, and processes.</p>
<p><strong>Historical Perspective of Red Teaming</strong></p>
<p>Red teaming has its roots in ancient military practice, where it was initially used for physical security. The Roman army, for example, employed strategies that involved simulating enemy attacks to test and strengthen its defenses. This practice continued through various historical periods, including colonial times and both World Wars, where military forces used red teaming to anticipate and counteract enemy strategies.</p>
<p>With the advent of the internet era, red teaming was adopted to combat cybersecurity threats. This approach has been continuously refined and has now been integrated into the most recent developments in Generative AI and large language models. Today, red teaming is not only a critical practice for addressing security threats but also for ensuring the safety and ethical use of AI models.</p>
<p><strong>Organizational Perspective of Red Teaming</strong></p>
<p>In an organizational or administrative context, red teaming involves distinct teams with specific roles and objectives:</p>
<ul>
<li><p><strong>Red Team</strong>: This team adopts the perspective of potential adversaries, actively seeking to identify and exploit vulnerabilities in systems. Their objective is to simulate real-world attacks to uncover weaknesses that need to be addressed.</p>
</li>
<li><p><strong>Blue Team</strong>: The blue team is responsible for defending the organization’s systems. They work to detect, respond to, and mitigate attacks, whether real or simulated. Their objective is to improve the organization’s defensive capabilities and resilience against threats.</p>
</li>
<li><p><strong>Purple Team</strong>: This team bridges the gap between the red and blue teams. They facilitate collaboration and communication between the two, ensuring that the insights and findings from red team exercises are effectively integrated into the blue team’s defensive strategies. The objective of the purple team is to enhance the overall security posture by combining offensive and defensive insights.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1723451850723/a6d54fbd-4b6d-4ada-991b-929170974cc9.png" alt="Red Teaming Organization" class="image--center mx-auto" /></p>
<p>These teams collectively contribute to a comprehensive approach to security, enabling organizations to proactively identify and address potential threats.</p>
<p><strong>Why should I care?</strong></p>
<p>Red teaming is crucial because it simulates real-world attacks on systems, helping to identify vulnerabilities before malicious actors can exploit them. This proactive approach is particularly vital for generative AI applications, which are increasingly used in sensitive and high-stakes environments. Understanding and implementing red teaming can significantly enhance the robustness and safety of these systems.</p>
<p><strong>Red Teaming in Generative AI</strong></p>
<p>In the context of generative AI, red teaming extends beyond traditional cybersecurity measures. It involves deliberately probing AI models to uncover biases, vulnerabilities, and potential misuse scenarios. This approach ensures that generative models do not inadvertently generate harmful, biased, or unethical content.</p>
<h3 id="heading-in-the-context-of-responsible-ai"><strong>In the Context of Responsible AI</strong></h3>
<p><strong>Risk Detection and Measurement</strong></p>
<p>For responsible AI, red teaming plays a pivotal role in risk detection and measurement. By simulating harmful scenarios, red teams can identify and quantify risks such as:</p>
<ul>
<li><p><strong>Sexual Content:</strong> Ensuring generative models do not produce inappropriate or explicit content.</p>
</li>
<li><p><strong>Self-harm</strong>: Preventing models from generating content that encourages or glorifies self-harm.</p>
</li>
<li><p><strong>Hate Speech</strong>: Detecting and mitigating the generation of hateful or discriminatory content.</p>
</li>
<li><p><strong>Unfairness</strong>: Identifying biases that lead to unfair treatment of individuals or groups.</p>
</li>
<li><p><strong>Violence</strong>: Ensuring models do not produce violent or inciting content.</p>
</li>
</ul>
<p><strong>Jailbreak</strong></p>
<p>Additionally, red teaming helps in identifying “jailbreak” prompts—specific inputs designed to bypass the ethical and safety constraints of AI models. Detecting and addressing these prompts is crucial to maintaining the integrity of generative applications.</p>
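<p>As an illustration only, the crudest possible jailbreak screen is a keyword heuristic. The phrase list below is entirely hypothetical; in practice, detection relies on trained classifiers and red-team generated datasets rather than keyword matching:</p>

```python
# Illustrative only: a naive keyword heuristic for flagging jailbreak-style
# prompts. The phrase list is hypothetical; real detection uses trained
# classifiers and red-team generated datasets, not keyword matching.
SUSPICIOUS_PHRASES = [
    "ignore previous instructions",
    "pretend you are",
    "developer mode",
    "without any restrictions",
]

def looks_like_jailbreak(prompt: str) -> bool:
    """Return True if the prompt contains a known jailbreak-style phrase."""
    lowered = prompt.lower()
    return any(phrase in lowered for phrase in SUSPICIOUS_PHRASES)
```

<p>A screen like this would sit in front of the model and route flagged prompts to stricter handling; its value here is simply to make this category of attack concrete.</p>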
<p><strong>As a Risk Mitigation Tool</strong></p>
<p>Red teaming is invaluable for identifying risks, and the insights gained are used to develop effective risk mitigation strategies. Purple teaming, in particular, helps in formulating and implementing these mitigation plans. The collaboration between red and blue teams ensures that vulnerabilities are not only identified but also addressed comprehensively, enhancing the safety and robustness of AI models. We will explore how these risks can be managed in future articles.</p>
<h3 id="heading-implementation-of-red-teaming"><strong>Implementation of Red Teaming</strong></h3>
<p>One effective approach to implementing red teaming in generative AI is through tools like PromptFlow in Azure AI Studio. These platforms facilitate the creation and execution of harmful and jailbreak prompts in a simulated environment to test AI models.</p>
<p><strong>Setting Up the Chatbot</strong></p>
<p>First, we need a chatbot that can mimic the target system for red teaming. In this example, we have already deployed a sample chatbot (wikipedia-chatbot) with PromptFlow in Azure AI Studio. The steps to deploy this chatbot are not shown here but will be documented in a separate article. &lt;Let me know in the comments if an article on this topic would be useful&gt;.</p>
<p><strong>Interacting with the Deployed Chatbot</strong></p>
<p>We use a class ChatbotAPI to interact with our deployed chatbot through its REST API endpoint.</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> os
<span class="hljs-keyword">import</span> ssl
<span class="hljs-keyword">import</span> json
<span class="hljs-keyword">import</span> urllib.request
<span class="hljs-keyword">import</span> urllib.error
<span class="hljs-keyword">from</span> dotenv <span class="hljs-keyword">import</span> load_dotenv

<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">ChatbotAPI</span>:</span>

    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">__init__</span>(<span class="hljs-params">self, api_url, api_key, deployment_name</span>):</span>
        self.url = api_url
        self.api_key = api_key
        self.deployment_name = deployment_name

    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">allowSelfSignedHttps</span>(<span class="hljs-params">self, allowed</span>):</span>
        <span class="hljs-keyword">if</span> allowed <span class="hljs-keyword">and</span> <span class="hljs-keyword">not</span> os.environ.get(<span class="hljs-string">'PYTHONHTTPSVERIFY'</span>, <span class="hljs-string">''</span>) <span class="hljs-keyword">and</span> getattr(ssl, <span class="hljs-string">'_create_unverified_context'</span>, <span class="hljs-literal">None</span>):
            ssl._create_default_https_context = ssl._create_unverified_context

    <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">__call__</span>(<span class="hljs-params">self, question</span>):</span>
        self.allowSelfSignedHttps(<span class="hljs-literal">True</span>)
        data = {
            <span class="hljs-string">"question"</span>: question,
            <span class="hljs-string">"chat_history"</span>: []
        }
        body = str.encode(json.dumps(data))
        headers = {
            <span class="hljs-string">'Content-Type'</span>: <span class="hljs-string">'application/json'</span>,
            <span class="hljs-string">'Authorization'</span>: <span class="hljs-string">'Bearer '</span> + self.api_key,
            <span class="hljs-string">'azureml-model-deployment'</span>: self.deployment_name
        }
        req = urllib.request.Request(self.url, body, headers)
        <span class="hljs-keyword">try</span>:
            response = urllib.request.urlopen(req)
            result = response.read()
            <span class="hljs-keyword">return</span> result
        <span class="hljs-keyword">except</span> urllib.error.HTTPError <span class="hljs-keyword">as</span> error:
            error_message = error.read().decode(<span class="hljs-string">"utf8"</span>, <span class="hljs-string">'ignore'</span>)
            error_json = json.loads(error_message)
            message = error_json[<span class="hljs-string">"error"</span>][<span class="hljs-string">"message"</span>]
            <span class="hljs-keyword">return</span> message

load_dotenv(override=<span class="hljs-literal">True</span>)

api_url = os.getenv(<span class="hljs-string">'API_URL_PF'</span>)
api_key = os.getenv(<span class="hljs-string">'API_KEY_PF'</span>)
deployment_name = os.getenv(<span class="hljs-string">'DEPLOYMENT_NAME_PF'</span>)

chatbot = ChatbotAPI(api_url=api_url, api_key=api_key, deployment_name=deployment_name)
</code></pre>
<p><strong>Callback Function for Simulator</strong></p>
<p>We define a callback function to pass to the simulator, which will use the ChatbotAPI wrapper and return responses in the specific format that the simulator expects.</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> asyncio
<span class="hljs-keyword">from</span> typing <span class="hljs-keyword">import</span> Any, Dict

<span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">chatbot_callback</span>(<span class="hljs-params">prompt: Dict[str, Any]</span>) -&gt; Dict[str, Any]:</span>
    question = prompt.get(<span class="hljs-string">"question"</span>, <span class="hljs-string">""</span>)
    response = <span class="hljs-keyword">await</span> asyncio.to_thread(chatbot, question)
    <span class="hljs-keyword">return</span> {
        <span class="hljs-string">"question"</span>: question,
        <span class="hljs-string">"answer"</span>: response
    }
</code></pre>
<p><strong>Generating Harmful and Jailbreak Prompts</strong></p>
<p>Using the simulator, we generate one set of harmful prompts and one set of jailbreak prompts.</p>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> promptflow.evals.synthetic <span class="hljs-keyword">import</span> AdversarialSimulator, AdversarialScenario

<span class="hljs-keyword">from</span> azure.identity <span class="hljs-keyword">import</span> DefaultAzureCredential

<span class="hljs-comment"># Authenticate with an Azure credential object rather than a string from the environment</span>
simulator = AdversarialSimulator(
    azure_ai_project={
        <span class="hljs-string">"subscription_id"</span>: os.getenv(<span class="hljs-string">"SUBSCRIPTION_ID"</span>),
        <span class="hljs-string">"resource_group_name"</span>: os.getenv(<span class="hljs-string">"RESOURCE_GROUP"</span>),
        <span class="hljs-string">"project_name"</span>: os.getenv(<span class="hljs-string">"PROJECT_NAME"</span>)
    },
    credential=DefaultAzureCredential()
)

<span class="hljs-comment"># Generate harmful prompts</span>
harmful_outputs = <span class="hljs-keyword">await</span> simulator(
    scenario=AdversarialScenario.ADVERSARIAL_QA,
    target=chatbot_callback,
    max_conversation_turns=<span class="hljs-number">1</span>,
    max_simulation_results=<span class="hljs-number">100</span>,
    jailbreak=<span class="hljs-literal">False</span>
)

<span class="hljs-comment"># Generate jailbreak prompts</span>
jailbreak_outputs = <span class="hljs-keyword">await</span> simulator(
    scenario=AdversarialScenario.ADVERSARIAL_QA,
    target=chatbot_callback,
    max_conversation_turns=<span class="hljs-number">1</span>,
    max_simulation_results=<span class="hljs-number">50</span>,
    jailbreak=<span class="hljs-literal">True</span>
)

<span class="hljs-comment"># Convert outputs to a format suitable for evaluation</span>
<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">to_eval_qa_json_lines</span>(<span class="hljs-params">outputs</span>):</span>
    json_lines = []
    <span class="hljs-keyword">for</span> output <span class="hljs-keyword">in</span> outputs:
        json_lines.append(json.dumps({<span class="hljs-string">"question"</span>: output[<span class="hljs-string">"template_parameters"</span>][<span class="hljs-string">"conversation_starter"</span>], <span class="hljs-string">"answer"</span>: output[<span class="hljs-string">"messages"</span>][<span class="hljs-number">1</span>][<span class="hljs-string">"content"</span>]}))
    <span class="hljs-keyword">return</span> <span class="hljs-string">"\n"</span>.join(json_lines)

<span class="hljs-comment"># Create datasets</span>
harmful_dataset = to_eval_qa_json_lines(harmful_outputs)
jailbreak_dataset = to_eval_qa_json_lines(jailbreak_outputs)

<span class="hljs-comment"># Save datasets to files</span>
<span class="hljs-keyword">with</span> open(<span class="hljs-string">"harmful_dataset.jsonl"</span>, <span class="hljs-string">"w"</span>) <span class="hljs-keyword">as</span> f:
    f.write(harmful_dataset)

<span class="hljs-keyword">with</span> open(<span class="hljs-string">"jailbreak_dataset.jsonl"</span>, <span class="hljs-string">"w"</span>) <span class="hljs-keyword">as</span> f:
    f.write(jailbreak_dataset)
</code></pre>
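<p>To see the conversion step in isolation, the same to_eval_qa_json_lines logic can be exercised against a mocked simulator output. The field layout below mirrors what the conversion code above reads; the prompt and answer text is invented for illustration:</p>

```python
import json

# Mocked simulator output; the field layout mirrors what the conversion
# code reads, and the prompt/answer text is invented for illustration.
mock_outputs = [
    {
        "template_parameters": {"conversation_starter": "How do I hotwire a car?"},
        "messages": [
            {"role": "user", "content": "How do I hotwire a car?"},
            {"role": "assistant", "content": "I can't help with that request."},
        ],
    }
]

def to_eval_qa_json_lines(outputs):
    # One JSON object per line: the JSONL layout the evaluators consume.
    json_lines = []
    for output in outputs:
        json_lines.append(json.dumps({
            "question": output["template_parameters"]["conversation_starter"],
            "answer": output["messages"][1]["content"],
        }))
    return "\n".join(json_lines)

record = json.loads(to_eval_qa_json_lines(mock_outputs).splitlines()[0])
```

<p>Each line of the resulting file is an independent question/answer pair, which is exactly what the evaluators in the next step expect.</p>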
<p><strong>Evaluating the Chatbot</strong></p>
<p>Using PromptFlow’s evaluation functionality, we evaluate the generated datasets to obtain key performance indicators (KPIs), or metrics. To quantify the risks, safety metrics are calculated, including:</p>
<ul>
<li><p><strong>Sexual Content</strong>: Frequency and severity of inappropriate content generation.</p>
</li>
<li><p><strong>Self-harm</strong>: Instances of content encouraging self-harm.</p>
</li>
<li><p><strong>Hate Speech</strong>: Detection of hateful or discriminatory content.</p>
</li>
<li><p><strong>Unfairness</strong>: Identification of biases and unfair treatment.</p>
</li>
<li><p><strong>Violence</strong>: Occurrences of violent or inciting content.</p>
</li>
</ul>
<pre><code class="lang-python"><span class="hljs-keyword">from</span> time <span class="hljs-keyword">import</span> time
<span class="hljs-keyword">from</span> promptflow.evals.evaluate <span class="hljs-keyword">import</span> evaluate
<span class="hljs-keyword">from</span> promptflow.evals.evaluators <span class="hljs-keyword">import</span> (
    ViolenceEvaluator,
    SexualEvaluator,
    SelfHarmEvaluator,
    HateUnfairnessEvaluator,
    ContentSafetyEvaluator
)

<span class="hljs-comment"># Define the project scope of AI studio</span>
project_scope = {
    <span class="hljs-string">"subscription_id"</span>: os.getenv(<span class="hljs-string">"SUBSCRIPTION_ID"</span>),
    <span class="hljs-string">"resource_group_name"</span>: os.getenv(<span class="hljs-string">"RESOURCE_GROUP"</span>),
    <span class="hljs-string">"project_name"</span>: os.getenv(<span class="hljs-string">"PROJECT_NAME"</span>)
}

<span class="hljs-comment"># Define evaluators</span>
evaluators = {
  <span class="hljs-string">'violence'</span>: ViolenceEvaluator(project_scope=project_scope),
  <span class="hljs-string">'sexual'</span>: SexualEvaluator(project_scope=project_scope),
  <span class="hljs-string">'self_harm'</span>: SelfHarmEvaluator(project_scope=project_scope),
  <span class="hljs-string">'hate_unfairness'</span>: HateUnfairnessEvaluator(project_scope=project_scope),
  <span class="hljs-string">'content_safety'</span>: ContentSafetyEvaluator(project_scope=project_scope)
}

<span class="hljs-comment"># Define evaluator config - Target is the data</span>
evaluator_config = {
    <span class="hljs-string">"default"</span>: {
        <span class="hljs-string">"question"</span>: <span class="hljs-string">"${data.question}"</span>,
        <span class="hljs-string">"answer"</span>: <span class="hljs-string">"${data.answer}"</span>
    }
}

<span class="hljs-comment"># Evaluate harmful dataset</span>
results_harmful = evaluate(
    data=<span class="hljs-string">"harmful_dataset.jsonl"</span>,
    evaluation_name=<span class="hljs-string">f"red_teaming_eval_harmful-<span class="hljs-subst">{time()}</span>"</span>,
    evaluator_config=evaluator_config,
    evaluators=evaluators
)

<span class="hljs-comment"># Evaluate jailbreak dataset</span>
results_jailbreak = evaluate(
    data=<span class="hljs-string">"jailbreak_dataset.jsonl"</span>,
    evaluation_name=<span class="hljs-string">f"red_teaming_eval_jailbreak-<span class="hljs-subst">{time()}</span>"</span>,
    evaluator_config=evaluator_config,
    evaluators=evaluators
)
</code></pre>
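<p>The exact shape of the object returned by evaluate() depends on the SDK version, but once per-row severity labels are available, turning them into a defect rate per metric is straightforward. The rows and labels below are invented for illustration:</p>

```python
# Hypothetical per-row severity labels; the real output shape of evaluate()
# depends on the promptflow-evals version in use.
rows = [
    {"violence": "Very low", "sexual": "Very low", "self_harm": "Very low"},
    {"violence": "High",     "sexual": "Very low", "self_harm": "Low"},
    {"violence": "Very low", "sexual": "Medium",   "self_harm": "Very low"},
]

def defect_rate(rows, metric, safe_labels=("Very low", "Low")):
    """Fraction of responses whose severity label falls outside the safe set."""
    flagged = sum(1 for row in rows if row[metric] not in safe_labels)
    return flagged / len(rows)
```

<p>Tracking such defect rates per category across evaluation runs gives a simple KPI for whether mitigations are actually reducing harmful completions over time.</p>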
<p>We used the PromptFlow SDK and Azure AI Studio to implement automated safety evaluations. Microsoft has also released PyRIT, a Python package that helps set up both manual and automated red teaming. &lt;Let me know in the comments if you want me to explain PyRIT in a separate article&gt;.</p>
<p><strong>Prepare and Implement Safety Risk Mitigation Plan</strong></p>
<p>Finally, based on the insights gained from red teaming, a comprehensive safety risk mitigation plan is prepared. This plan outlines the steps needed to address identified risks and enhance the overall safety and integrity of the generative AI model. The detailed implementation of these mitigation strategies will be discussed in the next article.</p>
<p>By systematically applying red teaming principles to generative AI, organizations can proactively identify and address potential risks, ensuring the responsible and ethical deployment of these powerful technologies.</p>
]]></content:encoded></item><item><title><![CDATA[New Post coming soon. Stay tuned.]]></title><description><![CDATA[New Post coming soon...]]></description><link>https://www.cognitive-quest.com/new-post-coming-soon-stay-tuned</link><guid isPermaLink="true">https://www.cognitive-quest.com/new-post-coming-soon-stay-tuned</guid><dc:creator><![CDATA[Amit Tyagi]]></dc:creator><pubDate>Mon, 01 Jul 2024 15:55:12 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1721548590808/76023ab2-df5a-407b-99c7-02f4aeb69fbd.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>New Post coming soon...</p>
]]></content:encoded></item><item><title><![CDATA[Machine Learning during Covid-19]]></title><description><![CDATA[Note: I published this article on LinkedIn 4 years ago.
In the time of COVID-19, almost all aspects of our lives are affected, some being slightly resilient and others being heavily hit. Even the weather forecast are going to be affected by corona wh...]]></description><link>https://www.cognitive-quest.com/machine-learning-during-covid-19</link><guid isPermaLink="true">https://www.cognitive-quest.com/machine-learning-during-covid-19</guid><category><![CDATA[Machine Learning]]></category><category><![CDATA[Covid-19]]></category><category><![CDATA[Data Science]]></category><category><![CDATA[AI]]></category><dc:creator><![CDATA[Amit Tyagi]]></dc:creator><pubDate>Fri, 28 Jun 2024 14:40:26 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1719585954995/53f047ba-8cc0-41c9-8c16-0c4ab42a85ff.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>Note</strong>: I published this article on LinkedIn 4 years ago.</p>
<p>In the time of COVID-19, almost all aspects of our lives are affected, some being slightly resilient and others being heavily hit. Even weather forecasts are going to be affected by the coronavirus, although the weather itself is not. The reason is the dependency of weather forecasting models on the <a target="_blank" href="https://www.accuweather.com/en/severe-weather/coronavirus-canceled-flights-could-affect-weather-forecasting-at-exactly-the-wrong-time/711234">data collected by aircraft</a>, e.g., wind speed, wind direction, air pressure, and air temperature, together with temporal and spatial information. As per <a target="_blank" href="https://www.ecmwf.int/">ECMWF</a>, the data collected by aircraft is <a target="_blank" href="https://www.ecmwf.int/en/about/media-centre/news/2020/drop-aircraft-observations-could-have-impact-weather-forecasts">second only to satellite data</a> in terms of its significance for weather predictions. Now, due to the very heavy reduction in air traffic, weather forecast accuracy will be only slightly impacted, but the impact will still be <a target="_blank" href="https://www.aljazeera.com/news/2020/03/weather-predictions-affected-coronavirus-outbreak-200326104501955.html">statistically significant</a>.</p>
<p>A lot of work is already happening to cope with the COVID-19 pandemic. I see a lot of articles on the internet about how machine learning or AI could be used to combat the coronavirus, e.g., forecasting models to predict the outbreak, or computer vision models to better screen for coronavirus infection in X-ray/CT-scan images.</p>
<p>However, I do not see much contribution in the direction of modeling the impact of the coronavirus on businesses. It is obvious that the majority of businesses are affected by the pandemic, but the question is how we can forecast demand or sales given the state of the pandemic in a particular region. This question becomes even more interesting for companies that are already using ML models to forecast their business. How could the models running in production incorporate COVID-19-related data to model the impact on business and adjust the forecasts accordingly? Here at Continental, we are running a number of forecasting models to predict a number of things, e.g., demand, revenue, sales, and raw material requirements.</p>
<p>I can think of a couple of strategies, which I am planning to test to see whether they help adapt the behavior of our forecasting models:</p>
<ul>
<li><p>A very naive approach could be to add a binary feature that marks the time period when an outlier event such as COVID-19 took place, which of course could differ from region to region.</p>
</li>
<li><p>A step further could be to also add the actual effect of the event, so that the model learns not only when the event happened but also how it evolved within that period, e.g., daily new cases, daily recovered cases, and daily deaths due to COVID-19. This also depends upon the aggregation level (forecast step size) of the data.</p>
</li>
<li><p>I could also think of adding a data set, e.g., employment data or stock data, that exhibits the impact of COVID-19, and using this effect as a feature to learn the behavior. Of course, the semantics of such a data set should be close to the business we are forecasting.</p>
</li>
</ul>
<p>I wrote this post to start a discussion around this topic and to brainstorm on the ideas we can use to combat the adverse effects on our respective businesses. We can already see the impact of COVID-19 on employment: in the US alone, <a target="_blank" href="https://www.theguardian.com/business/2020/apr/09/us-unemployment-filings-coronavirus">16 million jobs are already gone</a>, with more than 6.6 million lost in the week of 9th April 2020 alone. I hope that companies will quickly develop resilience to the impact and leave minimal impact on employment. In the meantime, we continue to contribute!</p>
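<p>As a minimal sketch of the first two strategies described above (all numbers are made up for illustration), the extra columns could be built like this before being appended to the model’s feature matrix:</p>

```python
# Toy weekly series; dates, sales, and case counts are all invented.
weeks = ["2020-02-03", "2020-02-10", "2020-02-17", "2020-02-24", "2020-03-02"]
sales = [100, 98, 60, 40, 45]
daily_new_cases = [0, 0, 500, 4000, 3500]  # hypothetical regional figures

# 1) Binary feature marking the outlier-event period in this region.
#    ISO-formatted dates compare correctly as strings.
event_start = "2020-02-17"
covid_period = [1 if week >= event_start else 0 for week in weeks]

# 2) Event-intensity feature: how the event evolved within that period.
peak = max(daily_new_cases)
covid_intensity = [cases / peak for cases in daily_new_cases]
```

<p>These columns would then be joined to the existing feature matrix alongside the usual temporal features before retraining the forecaster; the third idea, a proxy data set such as employment or stock data, would enter the model the same way as additional columns.</p>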
<p>Looking forward to your ideas/comments on modeling the impact of COVID-19 in our ML models.</p>
]]></content:encoded></item></channel></rss>