<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[AI Driven Development ]]></title><description><![CDATA[Unlocking Innovation: Interesting and Smart Ways People Are Using AI in Software Development]]></description><link>https://www.aidriven.dev</link><image><url>https://www.aidriven.dev/img/substack.png</url><title>AI Driven Development </title><link>https://www.aidriven.dev</link></image><generator>Substack</generator><lastBuildDate>Thu, 07 May 2026 11:11:46 GMT</lastBuildDate><atom:link href="https://www.aidriven.dev/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Mohamed Taha]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[aidrivendev@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[aidrivendev@substack.com]]></itunes:email><itunes:name><![CDATA[Mohamed]]></itunes:name></itunes:owner><itunes:author><![CDATA[Mohamed]]></itunes:author><googleplay:owner><![CDATA[aidrivendev@substack.com]]></googleplay:owner><googleplay:email><![CDATA[aidrivendev@substack.com]]></googleplay:email><googleplay:author><![CDATA[Mohamed]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[ChatGPT Builds Chrome Extensions for Me: Here's How]]></title><description><![CDATA[How I Used ChatGPT to Navigate the World of Chrome Extension Development: A Personal Exploration]]></description><link>https://www.aidriven.dev/p/chatgpt-builds-chrome-extensions</link><guid isPermaLink="false">https://www.aidriven.dev/p/chatgpt-builds-chrome-extensions</guid><dc:creator><![CDATA[Mohamed]]></dc:creator><pubDate>Sun, 10 Mar 2024 23:04:02 GMT</pubDate><enclosure 
url="https://images.unsplash.com/photo-1674027444454-97b822a997b6?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw4fHxjaGF0Z3B0fGVufDB8fHx8MTcxMDExMDY3NXww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://images.unsplash.com/photo-1674027444454-97b822a997b6?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw4fHxjaGF0Z3B0fGVufDB8fHx8MTcxMDExMDY3NXww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://images.unsplash.com/photo-1674027444454-97b822a997b6?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw4fHxjaGF0Z3B0fGVufDB8fHx8MTcxMDExMDY3NXww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1674027444454-97b822a997b6?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw4fHxjaGF0Z3B0fGVufDB8fHx8MTcxMDExMDY3NXww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1674027444454-97b822a997b6?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw4fHxjaGF0Z3B0fGVufDB8fHx8MTcxMDExMDY3NXww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1674027444454-97b822a997b6?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw4fHxjaGF0Z3B0fGVufDB8fHx8MTcxMDExMDY3NXww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080 1456w" sizes="100vw"><img src="https://images.unsplash.com/photo-1674027444454-97b822a997b6?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw4fHxjaGF0Z3B0fGVufDB8fHx8MTcxMDExMDY3NXww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080" width="3840" height="2160" 
data-attrs="{&quot;src&quot;:&quot;https://images.unsplash.com/photo-1674027444454-97b822a997b6?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw4fHxjaGF0Z3B0fGVufDB8fHx8MTcxMDExMDY3NXww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:2160,&quot;width&quot;:3840,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;a computer generated image of a network and a laptop&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="a computer generated image of a network and a laptop" title="a computer generated image of a network and a laptop" srcset="https://images.unsplash.com/photo-1674027444454-97b822a997b6?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw4fHxjaGF0Z3B0fGVufDB8fHx8MTcxMDExMDY3NXww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1674027444454-97b822a997b6?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw4fHxjaGF0Z3B0fGVufDB8fHx8MTcxMDExMDY3NXww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1674027444454-97b822a997b6?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw4fHxjaGF0Z3B0fGVufDB8fHx8MTcxMDExMDY3NXww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1674027444454-97b822a997b6?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw4fHxjaGF0Z3B0fGVufDB8fHx8MTcxMDExMDY3NXww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft 
pc-display-flex pc-gap-8 pc-reset"></div></div></div></a><figcaption class="image-caption">Photo by <a href="https://unsplash.com/@growtika">Growtika</a> on <a href="https://unsplash.com">Unsplash</a></figcaption></figure></div><p>Over the past few months, I built and published a few Chrome extensions, such as <a href="https://chromewebstore.google.com/detail/jiragpt/gjlghkkdjbmnhnommhkpaljgipomdpln">JiraGPT</a>, completely coded by ChatGPT. 
In this post, I&#8217;ll share how I did that.</p><p>Please keep in mind that I'm not claiming it's an error-free process, nor that you can build every type of extension with ChatGPT. Still, I believe many simple to medium-complexity extensions can be built this way, and even complex ones with extra effort on your end.</p><p>One more thing to note: even though anyone can get ChatGPT to build the same thing, some technical background helps. I'm primarily a backend engineer, yet that experience still made a difference. If you are non-technical, you can get a good output, but you probably won't, for example, be able to prompt ChatGPT about security features or about getting and saving an API key.</p><p></p><h3>Why Are Extensions a Great Use Case for ChatGPT Compared to Other Software Types?</h3><p>Before we dive into my process, it&#8217;s worth discussing why extensions are such a good fit for ChatGPT. Unlike many software types, such as web and mobile apps, a Chrome extension is in most cases just a handful of files, so the whole project is manageable and can fit into the LLM's context window. Keep in mind, this only holds if you are not using a JavaScript framework such as React; otherwise, the codebase quickly grows beyond what the model can keep in view.</p><p></p><h3>Why Should You Bother Even If You Don&#8217;t Care About Building Chrome Extensions?</h3><p>Well, if you don&#8217;t care about extensions, I assume you at least care about how to use ChatGPT and interact with LLMs in general. (If you don&#8217;t, you should.)</p><p>The core idea of interacting with LLMs is writing a good prompt, so we will discuss tips you can apply no matter what you want from the model. Prompt engineering is the technique of carefully crafting instructions to get a model to do precisely what you want, and mastering it pays off whenever you deal with LLMs. 
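For instance, the same principles carry over when you call the model programmatically. As a taste, here is a hedged sketch of asking a GPT model to turn a natural-language request into JQL, the kind of call an extension like JiraGPT needs (it assumes OpenAI's chat completions endpoint; the model name and system prompt are illustrative placeholders, not the ones JiraGPT ships):</p>

```javascript
// Sketch: translating a natural-language request into JQL via a GPT model.
// Assumptions: OpenAI's chat completions endpoint; the model name and the
// system prompt below are illustrative, not JiraGPT's actual ones.

// Build the request payload; kept separate so the prompt shape is easy to inspect.
function buildJqlRequest(naturalQuery) {
  return {
    model: "gpt-4o-mini", // illustrative; use whichever model you have access to
    messages: [
      {
        role: "system",
        content:
          "Translate the user's request into a valid Jira JQL query. Reply with the JQL only.",
      },
      { role: "user", content: naturalQuery },
    ],
  };
}

// Send the payload and return the model's JQL string.
async function naturalLanguageToJql(naturalQuery, apiKey) {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(buildJqlRequest(naturalQuery)),
  });
  const data = await response.json();
  return data.choices[0].message.content.trim();
}
```

<p>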
This applies to both ChatGPT and GPT APIs if you want to build apps on top of them.</p><p></p><h3>My process for building Chrome extensions with ChatGPT</h3><p></p><h4><strong>1. High-level </strong>Understanding<strong> of How Chrome Extensions Work</strong></h4><p>Grasping the basics of how Chrome extensions operate is crucial, even if it initially seems unnecessary. This foundational knowledge helped me understand why ChatGPT could be an excellent tool for extension development. Typically, a Chrome extension is built using a few key files, each serving a specific purpose:</p><ul><li><p><strong>Manifest.json</strong>: Think of this as the extension's ID card. It tells Chrome important details about the extension, like its name, version, and which files it needs to function.</p></li><li><p><strong>popup.html (and associated styling files like style.css or popup.css)</strong>: This is the visual part you interact with when you click the extension icon in your browser. The HTML file is the structure, and the CSS file makes it look good.</p></li><li><p><strong>popup.js</strong>: This JavaScript file brings life to the popup.html, handling user interactions and defining what happens when you click buttons or enter information.</p></li><li><p><strong>content.js</strong>: Imagine this as a behind-the-scenes worker that interacts directly with the web pages you visit. It can read or modify the content of a webpage, enabling the extension to add features or information directly on the page.</p></li><li><p><strong>background.js</strong>: This is the extension's thinker. Running separately from the webpage you're viewing, it manages broader tasks and logic that don't fit into the popup or content scripts, like listening for specific actions or managing complex logic.</p></li></ul><p>Understanding the role of each of these files simplifies troubleshooting and enhances your ability to create or modify extensions. 
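</p><p>For example, a minimal manifest.json wiring these pieces together might look like the sketch below (a generic Manifest V3 layout; the name, match pattern, and file paths are placeholders, not JiraGPT's actual manifest):</p>

```json
{
  "manifest_version": 3,
  "name": "My Extension",
  "version": "1.0",
  "action": { "default_popup": "popup.html" },
  "content_scripts": [
    { "matches": ["https://*.atlassian.net/*"], "js": ["content.js"] }
  ],
  "background": { "service_worker": "background.js" },
  "permissions": ["storage"]
}
```

<p>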
Additionally, knowing how to run or install an extension locally is incredibly useful for testing and development.</p><p></p><h4><strong>2. Write Clear and Very Specific Instructions</strong></h4><p>I always treat ChatGPT as if it&#8217;s a very smart intern who needs everything specified and planned out in order to do their job. What does this mean? It means giving a clear description of what the extension should do, what the UI should look like, and how the user is going to interact with it, and writing down every feature it should have.</p><p>Here is the prompt I used for building the JiraGPT Chrome extension:</p><pre><code>You are a software engineer with expertise in building Chrome extensions. You are working on JiraGPT, which is a Chrome extension that does the following:

1. The extension adds an icon to the Jira advanced search form (JQL) when the user visits the advanced search page.

2. When the JiraGPT icon is on (by default), the form can now expect a natural language query instead of JQL, and once the user hits another JiraGPT button, the user's natural query should be transformed into a valid JQL. Obviously, this is done by calling an OpenAI GPT model, which you should provide the code for as well.

You must provide everything and make sure to use Manifest V3. Also make sure the user must first add their OpenAI API key in a pop-up.

First, I want you, as a great software engineer, to make sure you understand the requirements and ask questions if anything is missing before starting to develop the extension.

</code></pre><p></p><h4><strong>3. Make Sure ChatGPT Understands the Requirements Before Starting Development</strong></h4><p>Just as a good software engineer would first make sure they understand the requirements of a task and ask questions about them, you should instruct ChatGPT to do the same. You will be surprised by what it comes back with: it asks questions that make you understand the product better yourself, and it helps you become more specific and cover areas you hadn&#8217;t thought of.</p><p>You can see that I already asked ChatGPT to do this at the end of the prompt. Here is its response:</p><pre><code>Certainly, understanding the requirements is a critical first step in any software development process. Let me summarize the requirements as I understand them:

Requirements for JiraGPT Chrome Extension
Jira Advanced Search Page: The extension should recognize when the user is on Jira's advanced search page.

JiraGPT Icon: An icon should be added to the Jira advanced search form (presumably next to the JQL input field).

Natural Language Queries:

When the JiraGPT icon is "on" (which it is by default), the form should expect a natural language query instead of JQL.
A separate JiraGPT button should be provided to trigger the transformation of the natural language query into a valid JQL query.
GPT Integration: The transformation from natural language to JQL is to be done by calling an OpenAI GPT model.

API Key Configuration: Before using the extension, the user should be prompted to enter their OpenAI API key.

Manifest V3: The extension should be developed using Chrome's Manifest V3 specifications.

Questions
UI/UX: Are there any specific UI/UX guidelines or mockups for how the icon and button should look or where exactly they should be placed?

Error Handling: How should the extension handle scenarios where the natural language query cannot be accurately transformed into a JQL query?

Settings: Should the extension provide any settings or options for the user to customize its behavior? For example, turning off the "natural language" feature?

Permissions: Are there specific permissions that should be requested from the user? For example, reading the current page, etc.

API Rate Limit: How should the extension handle rate-limiting or other API-related issues?

User Authentication: Should the OpenAI API key be stored securely? If so, do you have any preferences for how this should be done?

Compatibility: Is compatibility with specific versions of Jira or Chrome required?

Offline Mode: Should the extension offer any offline functionality?

Analytics: Do you want to collect any analytics data from the extension?

User Feedback: Should there be a way for users to provide feedback on the extension?

Updates: How should the extension handle updates? Should it auto-update or notify the user of available updates?

Would you like to clarify or add anything to these points?</code></pre><p>I then answer those questions, and, like a good engineer, it replies with a refined list of requirements so that we are both aligned.</p><p></p><h4>4. Iterative Development: Feature by Feature</h4><p>Once we set the feature list, I guide it to begin with one feature and not move to the next until that one is working well. This way, everything gets done properly instead of ending up in a big pile of errors. If it tries to rush, I remind it to take things slow, feature by feature, kind of like being agile: we make sure each part works before we tackle the next.</p><p>This means there&#8217;s a lot of back and forth, and it&#8217;s more like I&#8217;m helping it stay on track. If something isn&#8217;t right or I want a change, we address it then and there; we don&#8217;t leave anything hanging. Focusing on one feature at a time keeps things clear and straightforward.<br><br>Here is one of the back-and-forth iterations:</p><pre><code>User (ME):
two issues upon testing:
1. the go button looks a bit ugly and too close to the icon
2. you didn't actually mock/return the JQL so when clicking the "Go" button, nothing happens


ChatGPT:
I apologize for the oversight. Let's address these issues:

UI Appearance: To improve the UI, we can add some CSS styling to the "Go" button and add some spacing between the icon and the button.

Mock JQL: I missed adding the mock JQL response in the background script. We'll update it to return a sample JQL query when clicked.

Updated content.js
For the UI improvement, I'm adding some inline styles to the "Go" button.

.....</code></pre><h4></h4><h4>5. Keeping ChatGPT in the Loop: Context Management</h4><p>Through trial and error, I've learned something important: just like us, ChatGPT can get a bit lost if we go too far without a recap. Imagine bringing a new software engineer onto your team who doesn&#8217;t know what&#8217;s been built already; they might start from scratch instead of building on what exists. ChatGPT can do the same, because it can only keep so much in its "mind" at once. When it loses track of what we've done, it might try to redo things or get a bit mixed up.</p><p>This happens because ChatGPT has a limited context window to manage. Think of it as having to forget some things to take in new ones, like juggling balls where you have to drop one to catch another. When I notice it starting to "hallucinate" or forget our progress, I hit pause and remind it of what we've already done, kind of like giving that new engineer a project update. This helps ChatGPT remember where we are and what's next, making sure we build on our work instead of going in circles.</p><p></p><p>Here is an example prompt of how I usually do that. Note: this is very similar to how autonomous AI agents work, where on every iteration you pass the main goal, the list of tasks with each task's status, and the next task to work on to the LLM/agent. The challenge is to pass only relevant data and stay mindful of the context window limit.</p><pre><code>Remember, we are building a chrome extension that does X and we have a list of features as follows:
1. feature X. status: Done
2. feature Y. status: Done
3. feature Z. status: Not started


Here is our source code so far:

manifest.json:

    {
        "manifest_version": 3,
        "name": "SEO Tags Extractor",
        "version": "1.0",
        "permissions": ["activeTab"],
        "action": {
            "default_popup": "popup.html"
        },
        "background": {
            "service_worker": "js/background.js"
        }
    }
    
popup.html

    &lt;!DOCTYPE html&gt;
    &lt;html&gt;
    &lt;head&gt;
        &lt;link rel="stylesheet" type="text/css" href="css/popup.css"&gt;
    &lt;/head&gt;
    &lt;body&gt;
        &lt;h1&gt;SEO Tags&lt;/h1&gt;
        &lt;div id="tags"&gt;&lt;/div&gt;
        &lt;script src="js/popup.js"&gt;&lt;/script&gt;
    &lt;/body&gt;
    &lt;/html&gt;
 

popup.js:

    chrome.runtime.onMessage.addListener((message, sender, sendResponse) =&gt; {
        let tagsDiv = document.getElementById('tags');
        for (let tag in message) {
            let p = document.createElement('p');
            p.textContent = tag + ': ' + message[tag];
            tagsDiv.appendChild(p);
        }
        console.log('tagsDiv: ', tagsDiv);
    });
    
content.js:

    let metaTags = document.getElementsByTagName('meta');
    console.log('metaTags: ', metaTags);
    let seoTags = {};

    for (let tag of metaTags) {
        // og: tags use the `property` attribute, while description uses `name`
        let key = tag.getAttribute('property') || tag.name;
        if (key.toLowerCase().startsWith('og:') || key.toLowerCase() === 'description') {
            seoTags[key] = tag.content;
        }
    }

    chrome.runtime.sendMessage(seoTags);
    
background.js:

    chrome.action.onClicked.addListener((tab) =&gt; {
        chrome.scripting.executeScript({
            target: { tabId: tab.id },
            files: ['js/content.js']
        });
    });
    
let's start working on feature z</code></pre><p></p><h4>6. Prompt Engineering is an Iterative Process</h4><p>As I mentioned in a <a href="https://www.aidriven.dev/p/prompt-engineering-part-one-easy-wins">previous article</a>, there are some basic rules to prompt engineering, but the golden rule is learning as you go. The key is trying things out, seeing what works and what doesn't, and not being afraid to tweak your approach based on the AI's feedback.</p><p>So, definitely give the strategies I've shared a shot, but also stay flexible. Pay attention to how the conversation with the AI (ChatGPT or any other LLM) shifts with each change you make to your prompt. This back-and-forth will guide you to better results over time.</p><p></p><h4>Conclusion</h4><p>Wrapping things up, remember what I said at the start: this process isn't flawless or rock-solid. It wasn&#8217;t a magic snap of the fingers that got me the solution. It required hours of tweaking and testing to achieve what I initially envisioned. But ultimately, I managed to create a significant Chrome extension in a fraction of the time it would normally take. It's an experience that truly showcases the potential and excitement of blending AI with software development, regardless of whether you're an AI enthusiast or skeptic.</p><p>For more insights and stories like this, where we explore the fascinating intersection of AI and software development, make sure to subscribe to our newsletter. 
Stay ahead of the curve with us, as we dive into more adventures that blend the boundaries of technology and creativity.</p>]]></content:encoded></item><item><title><![CDATA[ Stack Overflow is here to stay: An Adaptation Through AI]]></title><description><![CDATA[The Evolution of a Developer's Best Friend]]></description><link>https://www.aidriven.dev/p/stack-overflow-is-here-to-stay-an</link><guid isPermaLink="false">https://www.aidriven.dev/p/stack-overflow-is-here-to-stay-an</guid><dc:creator><![CDATA[Mohamed]]></dc:creator><pubDate>Sun, 30 Jul 2023 13:02:58 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!MEbD!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04053d74-0418-4d51-9630-e163ed707366_1920x1080.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!MEbD!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04053d74-0418-4d51-9630-e163ed707366_1920x1080.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!MEbD!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04053d74-0418-4d51-9630-e163ed707366_1920x1080.jpeg 424w, https://substackcdn.com/image/fetch/$s_!MEbD!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04053d74-0418-4d51-9630-e163ed707366_1920x1080.jpeg 848w, https://substackcdn.com/image/fetch/$s_!MEbD!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04053d74-0418-4d51-9630-e163ed707366_1920x1080.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!MEbD!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04053d74-0418-4d51-9630-e163ed707366_1920x1080.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!MEbD!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04053d74-0418-4d51-9630-e163ed707366_1920x1080.jpeg" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/04053d74-0418-4d51-9630-e163ed707366_1920x1080.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:42925,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!MEbD!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04053d74-0418-4d51-9630-e163ed707366_1920x1080.jpeg 424w, https://substackcdn.com/image/fetch/$s_!MEbD!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04053d74-0418-4d51-9630-e163ed707366_1920x1080.jpeg 848w, https://substackcdn.com/image/fetch/$s_!MEbD!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04053d74-0418-4d51-9630-e163ed707366_1920x1080.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!MEbD!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04053d74-0418-4d51-9630-e163ed707366_1920x1080.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>The rise of AI-powered tools like ChatGPT and GitHub Copilot has transformed the software development process. Developers are turning to these tools due to the convenience they offer and the significant boost in productivity they provide. 
They're able to automate routine tasks, speed up debugging, and reduce the time spent searching for solutions to coding problems.</p><p>Despite these advancements, my shift to AI tools like ChatGPT and GitHub Copilot stirred mixed feelings. Their convenience and productivity boost were a game changer, but I also felt a hint of guilt as they began to replace Stack Overflow, my trusted ally in problem-solving. This blend of progress and nostalgia, excitement for the new and loyalty to the old, created a sense of conflict within me.</p><p>This might sound overly emotional for a discussion about dev tools. However, it's only human to feel a bond with the platforms and communities that aid our growth. When these tools change, it's natural to experience a mix of emotions.</p><p>After all, Stack Overflow has been an invaluable asset to developers worldwide, myself included. The community of contributors who spent countless hours answering queries and aiding others in their coding journeys has provided insights that have shaped the landscape of software development.</p><p>Recently, however, there's been a noticeable dip in Stack Overflow traffic as AI-driven solutions become more popular. 
But, as any good developer knows, adaptation is key to survival and growth in this ever-evolving digital world. That's why I was thrilled to hear about the recent announcements from Stack Overflow.</p><p>It was inevitable that Stack Overflow would embrace AI &#8212; but the question was always how they would integrate it into their existing model. Now we have the answer, and it's clear that Stack Overflow is ready not only to adapt but to thrive in this new era.</p><p></p><h2><strong>Why should we care?</strong></h2><p>Before we delve deeper into the implications of Stack Overflow's shift towards AI, it's important to understand the crucial role it plays in software development. For years, Stack Overflow has been the lifeline for developers worldwide, from seasoned professionals grappling with obscure issues to novices finding their feet in the coding world. It's a vast, dynamic repository of knowledge, crowd-sourced from millions of developers globally. Its loss would leave a significant knowledge gap in the software development landscape.</p><p>Furthermore, the collaborative, community-driven nature of Stack Overflow is a testament to the principles of open source and shared learning. It's not just a question-and-answer site; it's a space that fosters the exchange of ideas, encourages mentorship, and creates a sense of community among developers. Therefore, ensuring its continuity and relevance in an AI-driven era is critical.</p><h3></h3><h2><strong>How is Stack Overflow adapting?</strong></h2><p>Stack Overflow's journey to embracing AI is encapsulated in its OverflowAI initiative, a series of steps designed to enhance user experience and streamline information accessibility:</p><ol><li><p><strong>Semantic Search: </strong>Stack Overflow is shifting from traditional lexical search to semantic search, a conversational, human-centered model. 
This improvement makes finding answers more intuitive, aligning the search results more closely with what users are researching.</p></li><li><p><strong>Stack Overflow for Teams: </strong>Stack Overflow is expanding its AI-powered semantic search to its Teams platform. Users will be able to find relevant answers faster and discover related knowledge from reliable sources, including Stack Overflow for Teams, public Stack Overflow, and other repositories like Confluence and GitHub.</p></li><li><p><strong>Enterprise knowledge ingestion: </strong>OverflowAI introduces enterprise knowledge ingestion to Stack Overflow for Teams. When starting a new instance or adding teammates, users can quickly curate a knowledge base using existing, trusted content. AI/ML will suggest the initial tags, questions, and answers, based on areas where robust documentation or solutions are frequently requested. This feature not only saves time but ensures that valuable repository documents are discovered and utilized. All curated content, complete with quality indicators (votes, edits, comments, views, etc.), will be discoverable, reusable, and easy to update as the organization evolves.</p></li><li><p><strong>Slack Integration: </strong>Stack Overflow now integrates with Slack via the StackPlusOne chatbot, making knowledge from Stack Overflow for Teams readily accessible. This GenAI integration provides instant solutions to technical challenges, using data from both Teams instances and the broader Stack Overflow community. 
What's more, answers are presented in a conversational format, ensuring that even less-technical team members can understand them.</p></li><li><p><strong>IDE Extension for Visual Studio Code: </strong>Stack Overflow is developing an IDE extension powered by OverflowAI, aiming to provide validated content from both the public platform and private Stack Overflow for Teams instances, right within the developer's coding workspace.</p></li><li><p><strong>AI Community Discussions: </strong>Stack Overflow is cultivating an AI-focused community on Stack Exchange. Users can discuss AI strategies, share ideas, and keep up with GenAI developments. They've also added 'Discussions' to their NLP Collective for technical debates and knowledge exchange.</p></li></ol><h2>Conclusion</h2><p>It's exciting to witness Stack Overflow adapt and evolve to meet the challenges posed by the rise of AI. But what do these changes mean for the wider developer community? At a basic level, they promise more streamlined information access and an improved user experience. On a larger scale, they could transform the way knowledge is shared and learned within the developer community, fostering even greater collaboration and mutual support.</p><p>As for how Stack Overflow's new AI features stack up against existing AI tools like ChatGPT and GitHub Copilot, that remains to be seen. I'm optimistic, though. The integration of AI into Stack Overflow, a platform that has been pivotal to my own development and to millions of other developers, brings a sense of anticipation. 
I'm eager to see how it enhances our interactions with this invaluable resource.</p><p>For a firsthand look, check out the official announcement on Stack Overflow's blog <strong><a href="https://stackoverflow.blog/2023/07/27/announcing-overflowai/">here</a></strong>.</p><p>If you found this useful, please consider sharing and subscribing for more updates on software development and AI trends.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.aidriven.dev/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading AI Driven Development ! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Prompt Engineering Series, Part One: Easy Wins in Prompt Crafting]]></title><description><![CDATA[Straightforward Approaches to Better Your Experience with LLM Prompts]]></description><link>https://www.aidriven.dev/p/prompt-engineering-part-one-easy-wins</link><guid isPermaLink="false">https://www.aidriven.dev/p/prompt-engineering-part-one-easy-wins</guid><dc:creator><![CDATA[Mohamed]]></dc:creator><pubDate>Sat, 17 Jun 2023 16:34:22 GMT</pubDate><enclosure url="https://images.unsplash.com/photo-1684487747720-1ba29cda82f8?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw0fHxjaGF0JTIwZ3B0fGVufDB8fHx8MTY4NzAxOTU0MHww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p></p><div 
class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://images.unsplash.com/photo-1684487747720-1ba29cda82f8?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw0fHxjaGF0JTIwZ3B0fGVufDB8fHx8MTY4NzAxOTU0MHww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://images.unsplash.com/photo-1684487747720-1ba29cda82f8?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw0fHxjaGF0JTIwZ3B0fGVufDB8fHx8MTY4NzAxOTU0MHww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1684487747720-1ba29cda82f8?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw0fHxjaGF0JTIwZ3B0fGVufDB8fHx8MTY4NzAxOTU0MHww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1684487747720-1ba29cda82f8?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw0fHxjaGF0JTIwZ3B0fGVufDB8fHx8MTY4NzAxOTU0MHww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1684487747720-1ba29cda82f8?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw0fHxjaGF0JTIwZ3B0fGVufDB8fHx8MTY4NzAxOTU0MHww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080 1456w" sizes="100vw"><img src="https://images.unsplash.com/photo-1684487747720-1ba29cda82f8?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw0fHxjaGF0JTIwZ3B0fGVufDB8fHx8MTY4NzAxOTU0MHww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080" width="3840" height="2160" 
data-attrs="{&quot;src&quot;:&quot;https://images.unsplash.com/photo-1684487747720-1ba29cda82f8?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw0fHxjaGF0JTIwZ3B0fGVufDB8fHx8MTY4NzAxOTU0MHww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:2160,&quot;width&quot;:3840,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;a square object with a knot on it&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="a square object with a knot on it" title="a square object with a knot on it" srcset="https://images.unsplash.com/photo-1684487747720-1ba29cda82f8?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw0fHxjaGF0JTIwZ3B0fGVufDB8fHx8MTY4NzAxOTU0MHww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1684487747720-1ba29cda82f8?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw0fHxjaGF0JTIwZ3B0fGVufDB8fHx8MTY4NzAxOTU0MHww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1684487747720-1ba29cda82f8?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw0fHxjaGF0JTIwZ3B0fGVufDB8fHx8MTY4NzAxOTU0MHww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1684487747720-1ba29cda82f8?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw0fHxjaGF0JTIwZ3B0fGVufDB8fHx8MTY4NzAxOTU0MHww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=1080 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button 
tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Photo by <a href="https://unsplash.com/@maria_shalabaieva">Mariia Shalabaieva</a> on <a href="https://unsplash.com">Unsplash</a></figcaption></figure></div><p>Hello and welcome back to <a href="http://www.aidriven.dev">AIDriven.Dev</a>. 
We continuously explore fascinating and innovative methods of integrating AI and Large Language Models (LLMs) into software development.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.aidriven.dev/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">AI Driven Development  is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>I&#8217;m excited to present Part One of our ongoing series - Prompt Engineering. This series focuses on the art and science of crafting prompts for LLMs, specifically the GPT family. Our goal? To help you dramatically enhance your LLM experiences through effective prompt crafting.</p><p>In this part, we're focusing on some 'easy wins' - basic yet highly effective tips that will serve as a solid foundation for more advanced techniques we'll explore in future parts of this series.</p><p>It's worth mentioning that while most of these strategies should be effective across various models, my testing and validation have been primarily focused on OpenAI&#8217;s GPT-3.5 and GPT-4 models. So, let's dive in and start boosting your prompt crafting skills!</p><p><strong>NOTE: </strong>In this article, I'm sharing insights drawn from both my personal experiences with GPT models and valuable resources I've found along the way. 
Be sure to check out the list of references at the end of the article for more in-depth information.</p><h2>Why Prompt Engineering? And What Does It Mean for You as a Developer?</h2><p>When I first encountered the term "prompt engineering", I was slightly skeptical. The concept of "prompting" seemed very straightforward - after all, it's about guiding the LLM (like a GPT model) using natural language. However, as I dove deeper into its principles and guidelines, I realized it's not just about using natural language. Yes, you can still get decent results without adhering strictly to these guidelines, but if you're aiming for consistently good outcomes, these principles become indispensable.</p><p>You've probably seen lots of impressive AI applications that have sprung up in recent months. In most of these, prompt engineering plays a pivotal role. But what is prompt engineering, exactly?</p><p>Prompt engineering can be seen as the technique of carefully crafting instructions to get an ML model to do precisely what you want. When you use an instruction-tuned LLM (such as GPT), think of it as giving&nbsp;instructions to another person: someone smart who can do a lot of things but needs careful instruction to do them well.</p><p>Imagine the potential of automating tasks that currently consume countless hours of your time, from drafting emails to generating code. It all boils down to formulating a well-structured prompt and letting the AI work its magic.</p><p>So, whether you're working with ChatGPT or using GPT APIs, it's vital to comprehend and master the art of writing effective prompts. If you're puzzled about why the model isn't producing the results or the output format you desire, it's probably the prompt that needs tweaking. 
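</p><p>To make this concrete, here is a minimal sketch (my own illustration; the helper name is hypothetical, and the commented-out API call is only an assumption about what a typical chat-completion SDK looks like) of assembling a clear, delimited prompt in Python:</p>

```python
# Illustrative helper: attach input text to an instruction, clearly delimited.
# (build_prompt is a made-up name, not part of any official SDK.)
def build_prompt(instruction: str, text: str, delimiter: str = '"""') -> str:
    return f"{instruction}\n{delimiter}\n{text}\n{delimiter}"

prompt = build_prompt(
    "Summarize the text delimited by triple quotes in one sentence.",
    "Prompt engineering is the craft of writing clear instructions for LLMs.",
)

# Sending it would look roughly like this (needs the openai package and an
# API key, so it is left commented out):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-4",
#     messages=[{"role": "user", "content": prompt}],
# )
```

<p>The same structure works when chatting with ChatGPT directly: state the instruction first, then paste the delimited text.</p><p>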
Without further delay, let's delve into some easy wins - fundamental principles that will enhance your prompt writing skills.</p><p></p><h2>Basic Prompt Engineering Principles</h2><h4>Principle 1: Write clear and specific instructions</h4><p>This is the most fundamental and crucial principle of all. If you're aiming for good results, be detailed in your prompt. The more specifics you provide to the LLM about the task, the less likely you are to receive incorrect or irrelevant results. This principle really resonates with me. Consider your day-to-day job - if someone assigns you a task without clear and precise instructions, you'd probably do a lot of guesswork, potentially resulting in an outcome different from what was intended.</p><blockquote><p>GPTs can&#8217;t read your mind. If outputs are too long, ask for brief replies. If outputs are too simple, ask for expert-level writing. If you dislike the format, demonstrate the format you&#8217;d like to see. The less GPTs have to guess at what you want, the more likely you&#8217;ll get it -  <a href="https://platform.openai.com/docs/guides/gpt-best-practices/write-clear-instructions">OpenAI's GPT best practices</a></p></blockquote><p></p><p><strong>To ensure clarity and specificity in your instructions, consider employing these tactics:</strong></p><ol><li><p>Make your query detailed. For instance, "Who's president?" lacks clarity and specificity, while "Who was the president of Mexico in 2021, and how frequently are elections held?" provides a much clearer picture.</p></li><li><p>Use delimiters to distinctly indicate different parts of your input. Delimiters can be anything, such as: ```, """, &lt; &gt;, or &lt;tag&gt; &lt;/tag&gt;. Here's an example:</p><pre><code>Explain the code delimited by triple back-ticks:
```
Your code
```</code></pre></li><li><p>Ask for a structured output such as JSON, HTML, etc.</p><pre><code>Generate a list of three made-up book titles along \ 
with their authors and genres. 
Provide them in JSON format with the following keys: 
book_id, title, author, genre.</code></pre></li><li><p>Use &#8220;Few-Shot&#8221; prompting, which is basically providing examples of successful executions of the task you want performed and asking the model to follow your lead.</p><pre><code>Your task is to answer in a consistent style.

&lt;child&gt;: Teach me about patience.

&lt;grandparent&gt;: The river that carves the deepest \ 
valley flows from a modest spring; the \ 
grandest symphony originates from a single note; \ 
the most intricate tapestry begins with a solitary thread.

&lt;child&gt;: Teach me about resilience.</code></pre></li></ol><h4>Principle 2: Give the model time to think</h4><p>In software engineering, it's common practice to break down a large task into smaller, manageable sub-tasks. This approach also works well with LLMs, reducing the likelihood of errors due to rushed conclusions - a mistake humans might also make when dealing with complex tasks.</p><p><strong>Here are some tactics to help your model 'think' through tasks more effectively:</strong></p><ol><li><p>Specify the steps required to complete the task. For example, the following prompt would yield a better result than a generic, short instruction:</p><pre><code>Perform the following actions: 
1 - Summarize the following text delimited by triple backticks with 1 sentence.
2 - Translate the summary into French.
3 - List each name in the French summary.
4 - Output a json object that contains the following
keys: french_summary, num_names.

Separate your answers with line breaks.

Text:
```{text}```</code></pre></li></ol><p>I find myself applying this method quite frequently. I picked it up through a process of trial and error, initially making generic requests and then refining these prompts after observing what the model overlooked. This approach is especially beneficial when you're proficient at a task but find it tedious and would rather have the AI take it on. You get to guide the model through the necessary steps. Here's an example I use almost daily when working on my side projects using Python/Django stack. Bear in mind that it's not a perfect prompt. I usually have to tweak it a bit or provide more context as needed. However, it serves as a good illustration of being clear, specific and outlining the steps to complete the task.</p><pre><code>You are a very experienced Django developer AI. Your tech stack is Python 3.11, Django 4, PostgreSQL 13, TailwindCSS

You are working on project X, which does X. Your task is to build a new endpoint that should allow users to do X. Follow these steps:

1. Create the DB/Django model(s) and make sure to include the fields: name, description, etc.
2. Create the view function along with the new URL path/config and make sure authentication and authorization are considered
3. Create the view template (HTML template) and make sure to use tailwindCSS for styling
</code></pre><ol start="2"><li><p>Instruct the model to work out its own solution before rushing to a conclusion. This is somewhat similar to the first tactic, but here you're not providing the steps. Instead, you're asking the model to think through the task and come up with its own solution. Essentially, we're asking the AI to 'think out loud' as it tackles the task. Here's what an instance of this looks like:</p><pre><code>Your task is to determine if the student's solution \
is correct or not.
To solve the problem do the following:
- First, work out your own solution to the problem. 
- Then compare your solution to the student's solution \ 
and evaluate if the student's solution is correct or not. 
Don't decide if the student's solution is correct until 
you have done the problem yourself.

Question:
```
{question}
``` 
Student's solution:
```
{Student's solution}
```
Actual solution:</code></pre></li></ol><h4>Principle 3: Prompt engineering is an iterative process</h4><p>This is not a technique to apply to a prompt but something to keep in mind: prompt engineering is an iterative process that mirrors the evolution of ideas and implementations in machine learning model development. It's rare to conceive the perfect prompt for an application on the first attempt.</p><p>In the course of prompt engineering, you may start with an idea for a task you want the LLM to complete. You would then attempt to articulate this task in a clear and specific prompt. After running the prompt and analyzing the model's output, you may realize that your instructions weren't clear enough or that you didn't give the model ample time to "think." This feedback then informs your revisions to the prompt and helps refine your idea, creating a loop of continuous improvement until the prompt works ideally for your application.</p><h4>Principle 4: Be aware of the model&#8217;s limits</h4><p>Understanding the limitations of large language models (LLMs) is crucial when developing applications that leverage these models. Even though LLMs, such as GPT-4, have been trained on extensive data sets, they don't perfectly retain the information they've encountered. Consequently, they may struggle to accurately define the extent of their knowledge, sometimes generating responses about obscure or complex topics based on guesswork rather than factual information. These inaccuracies, known as "hallucinations," can result in the creation of plausible but untrue statements.</p><p>There are, however, strategies to mitigate these limitations. One effective method is to guide the model to draw upon relevant quotations from its input text when generating answers. 
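</p><p>As a rough illustration (the template wording below is my own, not a canonical recipe), that quote-first instruction can be packaged as a reusable prompt template:</p>

```python
# Hypothetical "quote-first" template: ask the model to extract verbatim
# quotes from the input before answering, to discourage hallucination.
def grounded_prompt(question: str, document: str) -> str:
    return (
        "Answer the question using only the document below.\n"
        "First, list the verbatim quotes from the document that are "
        "relevant to the question.\n"
        "Then answer, citing those quotes. If no relevant quote exists, "
        "say that you cannot answer.\n\n"
        f'Document:\n"""\n{document}\n"""\n\n'
        f"Question: {question}"
    )
```

<p>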
This approach can help ground the model's responses in the provided material, reducing the likelihood of hallucinations.</p><p>I also find it beneficial not to entirely trust that the LLM will always produce the output as instructed. For instance, if you've asked the model to output in JSON format, it's crucial to validate that the output is valid JSON and contains the specific JSON "keys" you required.</p><h2>Conclusion</h2><p>As we bring this first article in our 'Prompt Engineering' series to a close, I'd like to remind you that we've merely skimmed the surface of this fascinating topic. Today, we've touched upon some easy-win strategies that can immediately improve your interaction with Large Language Models, but the world of prompt engineering is vast and we've got a lot more ground to cover.</p><p>Remember, these principles are just the tip of the iceberg. We've focused on basic yet highly effective techniques, but there are more advanced strategies waiting to be unveiled. We'll dive deeper into these in the subsequent parts of this series, so stay tuned.</p><p>I encourage you to refer to the resources section below for a deeper dive into these concepts. Not only does it provide more details, but it also gives you a broader perspective on the subject.</p><p>If you found this article valuable, consider subscribing to our newsletter. You'll be the first to know when we release the next article in this series. Moreover, if you think this information could benefit others, do share it among your network. After all, the more we share, the more we learn and grow together.</p><p>Thank you for joining me in this exploration of prompt engineering. I look forward to continuing this journey with you in the subsequent parts of the series. 
Until next time!</p><p></p><h1>References</h1><ol><li><p><a href="https://learn.deeplearning.ai/">ChatGPT Prompt Engineering for Developers</a></p></li><li><p><a href="https://platform.openai.com/docs/guides/gpt-best-practices">GPT best practices</a></p></li><li><p><a href="https://learnprompting.org/">https://learnprompting.org/</a></p></li></ol><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.aidriven.dev/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">AI Driven Development  is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[AI Driven Development: Build the thing that builds all the things]]></title><description><![CDATA[Exploring the Power of AI in Software Development: Unleashing Your Personal Jarvis]]></description><link>https://www.aidriven.dev/p/ai-driven-development-build-the-thing</link><guid isPermaLink="false">https://www.aidriven.dev/p/ai-driven-development-build-the-thing</guid><dc:creator><![CDATA[Mohamed]]></dc:creator><pubDate>Sat, 03 Jun 2023 21:58:56 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Oief!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa6a0ec4-35d6-4a00-86d6-a672e89b3eff_500x500.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hi there! 
Welcome to <a href="http://www.aidriven.dev">AIDriven.Dev </a>- a newsletter for developers and indie hackers. Here, we delve into interesting and inspiring ways people are integrating AI in software development.</p><p>From Copilot and Copilot X to ChatGPT, we've witnessed an unprecedented leap in AI capabilities in recent months. It's as if someone hit the fast-forward button on technology evolution. As we gallop through this AI-driven era, we, developers, must learn to ride the wave instead of being swept by it. With AIDriven.Dev, we aspire to be your trusty surfboard, helping you navigate these thrilling waters. </p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.aidriven.dev/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">AI Driven Development  is a reader-supported publication. 
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>In this very first episode, we will talk about building your own Jarvis-like developer AKA your own junior developer agent</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Oief!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa6a0ec4-35d6-4a00-86d6-a672e89b3eff_500x500.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Oief!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa6a0ec4-35d6-4a00-86d6-a672e89b3eff_500x500.jpeg 424w, https://substackcdn.com/image/fetch/$s_!Oief!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa6a0ec4-35d6-4a00-86d6-a672e89b3eff_500x500.jpeg 848w, https://substackcdn.com/image/fetch/$s_!Oief!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa6a0ec4-35d6-4a00-86d6-a672e89b3eff_500x500.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!Oief!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa6a0ec4-35d6-4a00-86d6-a672e89b3eff_500x500.jpeg 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!Oief!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa6a0ec4-35d6-4a00-86d6-a672e89b3eff_500x500.jpeg" width="504" height="504" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/fa6a0ec4-35d6-4a00-86d6-a672e89b3eff_500x500.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:500,&quot;width&quot;:500,&quot;resizeWidth&quot;:504,&quot;bytes&quot;:68096,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Oief!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa6a0ec4-35d6-4a00-86d6-a672e89b3eff_500x500.jpeg 424w, https://substackcdn.com/image/fetch/$s_!Oief!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa6a0ec4-35d6-4a00-86d6-a672e89b3eff_500x500.jpeg 848w, https://substackcdn.com/image/fetch/$s_!Oief!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa6a0ec4-35d6-4a00-86d6-a672e89b3eff_500x500.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!Oief!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffa6a0ec4-35d6-4a00-86d6-a672e89b3eff_500x500.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft 
pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><p>Alright, I hear you. You're wondering if AI is here to help us or to take over. Remember, right now, these AI agents are excellent at churning out simple projects, not complex ones. And yes, the future could hold anything, but for now, the spotlight is on partnership, not replacement. I'm sharing these techniques because they're already being implemented in the software development scene. So, why not join the AI revolution and create your Jarvis-like AI assistant? It can tackle mundane and tedious projects in seconds or minutes - jobs that could have taken you days or even weeks to complete. 
Sounds tempting, doesn't it?</p><h3>Build the thing that builds the thing, build your own junior developer</h3><p>The idea is very simple: build a general-purpose junior dev agent that can build your projects. For example, you could build an agent that, with a few instructions, can build your Chrome extension, Slack bot, or even a full SaaS app.<br></p><blockquote><p><em>Stop building the thing.</em> <em>Build the thing that builds all the things.</em> IMO the most important thing every developer could be doing right now on nights and weekends is building a general purpose personal junior dev agent they can control and trust, that they can scale to fleets <a href="https://twitter.com/swyx">@swyx</a></p></blockquote><h2>Examples</h2><p>There are a lot of projects &#8212; mostly open-source &#8212; that have implemented this idea and that you can use to build your own junior dev agent. Here are some of them:</p><h4>1. Smol Developer:</h4><p>Github Repo: <a href="https://github.com/smol-ai/developer/">https://github.com/smol-ai/developer/</a></p><blockquote><p>With ~200 lines of Python code and with the help of GPT-4 (still in private beta, but you can use other models such as gpt-3.5-turbo; check the repo for more info), you can build your own junior developer that can develop and debug your entire app codebase.</p></blockquote><p>There are lots of examples of how people are using smol-dev to build different apps, so I recommend reading the repo <a href="https://github.com/smol-ai/developer/blob/main/readme.md">readme.md</a>. People have also forked the project to languages such as JS/TS, Go, and C#, which can be found <a href="https://github.com/smol-ai/developer/tree/main#major-forksalternatives">here</a> <br></p><h4>2. 
e2b:</h4><p>Github Repo: <a href="https://github.com/e2b-dev/e2b">https://github.com/e2b-dev/e2b</a></p><blockquote><p>Debug, deploy, and monitor your AI agents - "Vercel for AI agents"</p></blockquote><p>e2b (english2bits) is similar to <a href="https://github.com/smol-ai/developer/">smol-dev</a> and it&#8217;s open-source, with more features such as letting your agents operate in secure sandboxed cloud environments powered by Firecracker. It also has a nice UI for interacting with the agent.</p><p></p><h4>3. AutoPR:</h4><p>Github Repo: <a href="https://github.com/irgolic/AutoPR">https://github.com/irgolic/AutoPR</a></p><blockquote><p>AutoPR autonomously writes pull requests in response to issues with ChatGPT.</p></blockquote><p>This is an interesting one, and I was actually trying to build the same thing myself: an AI agent that listens to GitHub repo issues and, once it&#8217;s assigned an issue, creates a pull request with the fix.</p><p></p><h2>How do they work?</h2><p>At the heart of these projects lies the wizardry of cutting-edge AI tech (Large Language Models, or LLMs for short), such as GPT-4, and the craft of prompt engineering. This topic deserves its own full-blown episode, and yes, we've got that coming up for you in the future. While we won't dive deep into the workings of each project, we can certainly share a sneak peek into the workings of the simplest one - smol-dev. And trust me, they all share a similar baseline.</p><p>Check out the source code of smol-dev; it's pretty straightforward and easy to grasp. The secret ingredient here (other than the brilliant GPT-4, which is a pro at spitting out code) is the clever way the prompt is crafted for the model. The trick is to steer the model in a way that helps dodge errors. 
This involves providing it with a comprehensive features list, feeding any encountered errors back into the initial prompt to sidestep them in future runs, and, my personal favorite, defining the "shared dependencies". These could be filenames or variable names, and pinning them down ensures the model produces a consistent codebase. And voila, that's pretty much it! A simple yet potent idea. I can't recommend enough that you check out the repo and read through the source code.</p><p></p><h2>How could that help me as a developer?</h2><p>As a developer or an indie hacker, you can harness these AI-powered projects in two clear ways:</p><ol><li><p>Clone any of these projects and start tailoring your own agent. Put them to work and watch them create apps for you, especially the ones you find routine and time-consuming.</p></li><li><p>Dig deeper into the mechanics of these projects. Understand their prompt engineering techniques and inner workings. Draw inspiration and start building your own AI tools or, better yet, your very own 'thing that builds the things'.</p></li></ol><h2>Conclusion</h2><p>That's it for our first AIDriven.Dev issue! We hope it's stirred your interest in the blending of AI and software development. Whether you're eager to build your own AI assistant or excited to learn more about prompt engineering, there's plenty to look forward to.<br><br>Hungry for more insights and inspiring ways people use AI in software development? Make sure to subscribe to our newsletter.</p><p>Until our next issue, keep innovating with code!</p><p></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.aidriven.dev/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">AI Driven Development is a reader-supported publication. 
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item></channel></rss>