<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>AI &#8211; NoloWiz</title>
	<atom:link href="https://nolowiz.com/category/ai/feed/" rel="self" type="application/rss+xml" />
	<link>https://nolowiz.com</link>
	<description>Technology news, tips and tutorials</description>
	<lastBuildDate>Tue, 17 Mar 2026 03:05:10 +0000</lastBuildDate>
	<language>en</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=5.8.13</generator>

<image>
	<url>https://nolowiz.com/wp-content/uploads/2021/01/cropped-android-chrome-512x512-2-32x32.png</url>
	<title>AI &#8211; NoloWiz</title>
	<link>https://nolowiz.com</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>I Built a Map of Which Indian Jobs Are Most at Risk from AI</title>
		<link>https://nolowiz.com/i-built-a-map-of-which-indian-jobs-are-most-at-risk-from-ai/</link>
		
		<dc:creator><![CDATA[Rupesh Sreeraman]]></dc:creator>
		<pubDate>Mon, 16 Mar 2026 15:37:28 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<guid isPermaLink="false">https://nolowiz.com/?p=7073</guid>

					<description><![CDATA[<p>Everyone is talking about AI taking jobs. But most of that conversation is about America. What about India &#8211; where 1.4 billion people work across farming, construction, IT, banking, healthcare, and government? Where does AI actually threaten livelihoods, and where is the workforce relatively safe? I wanted to see this visually. So I built it. ... <a title="I Built a Map of Which Indian Jobs Are Most at Risk from AI" class="read-more" href="https://nolowiz.com/i-built-a-map-of-which-indian-jobs-are-most-at-risk-from-ai/" aria-label="More on I Built a Map of Which Indian Jobs Are Most at Risk from AI">Read more</a></p>
<p>The post <a rel="nofollow" href="https://nolowiz.com/i-built-a-map-of-which-indian-jobs-are-most-at-risk-from-ai/">I Built a Map of Which Indian Jobs Are Most at Risk from AI</a> appeared first on <a rel="nofollow" href="https://nolowiz.com">NoloWiz</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Everyone is talking about AI taking jobs. But most of that conversation is about America.</p>



<p>What about India &#8211; where 1.4 billion people work across farming, construction, IT, banking, healthcare, and government? Where does AI actually threaten livelihoods, and where is the workforce relatively safe?</p>



<p>I wanted to see this visually. So I built it. Inspired by <a href="https://github.com/karpathy/jobs" target="_blank" rel="noreferrer noopener">Andrej Karpathy&#8217;s jobs project</a>, which analyzed 342 US occupations from BLS data, I built the Indian version using NCS Portal data. <a href="https://nolowiz.com/ai-job-exposure-india/" target="_blank" rel="noreferrer noopener">You can explore it here</a>.</p>



<h2>What the map shows</h2>



<p>The visualization is an interactive treemap of the Indian job market, covering 10 major sectors and ~500 occupations from the <a href="http://ncs.gov.in" target="_blank" rel="noreferrer noopener">National Career Service portal</a>, India&#8217;s official government career database based on the NCO-2015 classification.</p>



<p>Each rectangle is one occupation. Two visual signals:</p>



<ul><li><strong>Size</strong> &#8211; how many people work in that occupation. A farmer&#8217;s rectangle is enormous. A software architect&#8217;s is tiny.</li><li><strong>Color</strong> &#8211; how exposed that occupation is to AI disruption, scored 0–10. Green means relatively safe. Red means high risk.</li></ul>
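This two-signal encoding can be sketched in a few lines of Python. This is a toy illustration only: the risk thresholds and the example headcounts are ours, not the map's actual scoring.

```python
def encode_tile(occupation, workers_millions, ai_risk):
    """Map one occupation to the treemap's two visual signals:
    area scales with headcount, color buckets the 0-10 AI-exposure score."""
    if ai_risk <= 3:
        color = "green"   # relatively safe
    elif ai_risk <= 6:
        color = "amber"   # moderate exposure
    else:
        color = "red"     # high risk
    return {"label": occupation, "area": workers_millions, "color": color}

# Illustrative figures only: a huge green farming tile vs. a tiny red IT tile.
print(encode_tile("Cultivator", 230, 2)["color"])          # green
print(encode_tile("Software Developer", 5, 8)["color"])    # red
```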



<p>The picture that emerges is striking, and very different from the American version of this story.</p>






<h2>What India&#8217;s map actually looks like</h2>



<figure class="wp-block-image size-large"><img loading="lazy" width="1024" height="481" src="https://nolowiz.com/wp-content/uploads/2026/03/image-1024x481.png" alt="" class="wp-image-7078" srcset="https://nolowiz.com/wp-content/uploads/2026/03/image-1024x481.png 1024w, https://nolowiz.com/wp-content/uploads/2026/03/image-300x141.png 300w, https://nolowiz.com/wp-content/uploads/2026/03/image-768x361.png 768w, https://nolowiz.com/wp-content/uploads/2026/03/image-150x70.png 150w, https://nolowiz.com/wp-content/uploads/2026/03/image.png 1277w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<p></p>



<p><strong>Agriculture dominates.</strong> Nearly 40% of India&#8217;s workforce is in farming &#8211; cultivators, dairy workers, livestock farmers. These jobs score 2–3 out of 10 on AI exposure. They require physical presence, seasonal judgment, and local knowledge that AI cannot replicate at scale, especially in India&#8217;s fragmented smallholder farming context. The agriculture block is a massive sea of green.</p>



<p><strong>Construction is the second green giant.</strong> With 60+ million workers, construction scores 3/10. Masons, welders, electricians, plumbers &#8211; physical skills in unpredictable environments that robots still can&#8217;t handle reliably. India&#8217;s infrastructure boom under Smart Cities and PM Gati Shakti is creating more of these relatively safe jobs.</p>



<p><strong>IT-ITeS is a small red island.</strong> India&#8217;s famous software and BPO sector employs far fewer people than agriculture &#8211; but scores 7–9 out of 10 on AI exposure. Software developers, data analysts, business process managers, content writers &#8211; these are exactly the jobs that LLMs are already eating into. The IT block is tiny but blazing red.</p>



<p><strong>BFSI tells a split story.</strong> Bank tellers and loan document processors are highly exposed (7–8/10). Relationship managers and wealth advisors are more protected (5/10) because trust still drives financial decisions in India. The branch banking model that employs hundreds of thousands is under significant pressure.</p>



<p><strong>Telecom is surprisingly high risk.</strong> Customer care agents, billing processors, network documentation staff &#8211; a huge portion of India&#8217;s telecom workforce is in roles scoring 7–8/10. Jio disrupted pricing; AI is disrupting the workforce. Tower technicians and field engineers score much lower (3–4/10) because their work is physical.</p>



<p><strong>Logistics is a tale of two workforces.</strong> Delivery workers and warehouse staff score low (2–3/10) &#8211; physical work, last-mile human judgment. But dispatch coordinators, route planners, and logistics analysts score 6–7/10. Zomato and Swiggy&#8217;s gig economy has created millions of AI-safe delivery jobs while quietly automating the planning layer above them.</p>



<p><strong>Organised Retail is in transition.</strong> Cashiers and billing staff score high (7/10) &#8211; self-checkout and UPI are already replacing them. But floor staff, visual merchandisers, and store managers score moderate (4–5/10). The kirana store owner scores low &#8211; hyperlocal relationships and informal credit systems are hard for AI to replicate.</p>



<p><strong>Public Administration scores higher than people expect.</strong> Data entry clerks, document processing officers, and administrative assistants in government score 6–7/10. The institutional inertia of Indian bureaucracy will delay automation, but it won&#8217;t prevent it. Millions of aspirational government job seekers are training for roles that AI will significantly reshape within a decade.</p>



<p><strong>Healthcare is the interesting middle ground.</strong> Doctors and specialists score moderately (5–6/10) because diagnosis and patient relationships still require human judgment. But medical transcriptionists, billing clerks, and hospital administrative staff score 8–9/10. AI will hollow out healthcare administration long before it touches clinical care.</p>



<p><strong>Education sits in the amber zone.</strong> With 10 million+ teachers in India, education scores 4–5/10. The human relationship at the core of teaching is protective &#8211; but administrative staff, content creators, and exam evaluators score much higher.</p>






<h2>The full picture across 10 sectors</h2>



<figure class="wp-block-image size-full"><img loading="lazy" width="679" height="454" src="https://nolowiz.com/wp-content/uploads/2026/03/scores_.png" alt="" class="wp-image-7085" srcset="https://nolowiz.com/wp-content/uploads/2026/03/scores_.png 679w, https://nolowiz.com/wp-content/uploads/2026/03/scores_-300x201.png 300w, https://nolowiz.com/wp-content/uploads/2026/03/scores_-150x100.png 150w" sizes="(max-width: 679px) 100vw, 679px" /></figure>



<h2>The uncomfortable takeaway</h2>



<p>India has spent 30 years building an economy on the back of knowledge work &#8211; IT services, BPO, back-office processing &#8211; that is precisely what AI automates best. Meanwhile, the jobs employing most Indians &#8211; farming, construction, delivery &#8211; are safe not because they&#8217;re valuable but because they&#8217;re physical and informal.</p>



<p>Construction workers building smart cities. Delivery workers powering e-commerce. Farmers feeding a billion people. All relatively safe from AI.</p>



<p>Software engineers. Bank clerks. Government data entry operators. Call centre agents. All highly exposed.</p>



<p>The map doesn&#8217;t have answers. But it makes the question visible.</p>



<p>The post <a rel="nofollow" href="https://nolowiz.com/i-built-a-map-of-which-indian-jobs-are-most-at-risk-from-ai/">I Built a Map of Which Indian Jobs Are Most at Risk from AI</a> appeared first on <a rel="nofollow" href="https://nolowiz.com">NoloWiz</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Run AI Models with Docker Model Runner: A Step-by-Step Guide</title>
		<link>https://nolowiz.com/run-ai-models-with-docker-model-runner-a-step-by-step-guide/</link>
		
		<dc:creator><![CDATA[Rupesh Sreeraman]]></dc:creator>
		<pubDate>Sat, 28 Feb 2026 10:34:24 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<guid isPermaLink="false">https://nolowiz.com/?p=7032</guid>

					<description><![CDATA[<p>In this article we will discuss how to pull and run Gen AI models using Docker Model Runner(DMR). Docker Model Runner (DMR) Docker Model Runner (DMR) is a tool built into Docker Desktop and Docker Engine that makes it easy to pull, run, and serve AI/LLM models locally directly from Docker Hub, any OCI-compliant registry, ... <a title="Run AI Models with Docker Model Runner: A Step-by-Step Guide" class="read-more" href="https://nolowiz.com/run-ai-models-with-docker-model-runner-a-step-by-step-guide/" aria-label="More on Run AI Models with Docker Model Runner: A Step-by-Step Guide">Read more</a></p>
<p>The post <a rel="nofollow" href="https://nolowiz.com/run-ai-models-with-docker-model-runner-a-step-by-step-guide/">Run AI Models with Docker Model Runner: A Step-by-Step Guide</a> appeared first on <a rel="nofollow" href="https://nolowiz.com">NoloWiz</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>In this article we will discuss how to pull and run generative AI models using Docker Model Runner (DMR).</p>



<h2>Docker Model Runner (DMR)</h2>



<p>Docker Model Runner (DMR) is a tool built into Docker Desktop and Docker Engine that makes it easy to pull, run, and serve AI/LLM models locally &#8211; directly from Docker Hub, any OCI-compliant registry, or <a href="https://huggingface.co/" target="_blank" rel="noreferrer noopener">Hugging Face</a>. Models are pulled from a model registry and cached locally.</p>



<p>DMR has the following key features:</p>



<ul><li>Serves models via OpenAI and <a href="https://ollama.com/" target="_blank" rel="noreferrer noopener">Ollama</a>-compatible APIs, so existing apps can plug right in</li><li>Models load into memory only at runtime and unload when not in use to save resources</li><li>The following inference engines are supported:<ul><li><a href="https://github.com/ggml-org/llama.cpp" target="_blank" rel="noreferrer noopener">llama.cpp</a> (default, all platforms)</li><li><a href="https://docs.vllm.ai/en/latest/" target="_blank" rel="noreferrer noopener">vLLM</a> (high throughput, NVIDIA)</li></ul></li><li>Image generation via diffusers</li><li>Integrates with AI coding tools like Cline, Continue, Cursor, and Aider</li><li>Works with Docker Compose and Testcontainers</li></ul>






<h2>Step 1: Enable Docker Model Runner</h2>



<p>First, we need to install Docker Desktop or Docker Engine by following the <a href="https://www.docker.com/get-started/" target="_blank" rel="noreferrer noopener">Docker getting started guide</a>. To enable Docker Model Runner, do the following.</p>



<p><strong>Docker Desktop:</strong> Go to the Settings -> AI tab and enable Docker Model Runner. Optionally enable GPU-backed inference if you have a supported NVIDIA GPU.</p>



<figure class="wp-block-image size-full"><img loading="lazy" width="958" height="613" src="https://nolowiz.com/wp-content/uploads/2026/02/image-2.png" alt="" class="wp-image-7034" srcset="https://nolowiz.com/wp-content/uploads/2026/02/image-2.png 958w, https://nolowiz.com/wp-content/uploads/2026/02/image-2-300x192.png 300w, https://nolowiz.com/wp-content/uploads/2026/02/image-2-768x491.png 768w, https://nolowiz.com/wp-content/uploads/2026/02/image-2-150x96.png 150w" sizes="(max-width: 958px) 100vw, 958px" /></figure>






<p><strong>Docker Engine (Linux):</strong> Install the plugin:</p>



<pre class="wp-block-code"><code>sudo apt-get update
sudo apt-get install docker-model-plugin</code></pre>



<p>Now we can verify the <code>docker model</code> command by running:</p>



<pre class="wp-block-code"><code>docker model version</code></pre>



<figure class="wp-block-image size-full"><img loading="lazy" width="357" height="82" src="https://nolowiz.com/wp-content/uploads/2026/02/image-3.png" alt="" class="wp-image-7035" srcset="https://nolowiz.com/wp-content/uploads/2026/02/image-3.png 357w, https://nolowiz.com/wp-content/uploads/2026/02/image-3-300x69.png 300w, https://nolowiz.com/wp-content/uploads/2026/02/image-3-150x34.png 150w" sizes="(max-width: 357px) 100vw, 357px" /></figure>



<h2>Step 2: Pull a Model</h2>



<p>Next we need to pull a model from <a href="https://hub.docker.com/" target="_blank" rel="noreferrer noopener">Docker Hub</a>.</p>



<pre class="wp-block-code"><code>docker model pull ai/smollm2:360M-Q4_K_M</code></pre>



<p>Or pull directly from HuggingFace:</p>



<pre class="wp-block-code"><code>docker model pull hf.co/bartowski/Llama-3.2-1B-Instruct-GGUF</code></pre>



<p>Models are cached locally after the first pull.</p>






<p>Models can also be pulled from the Docker Desktop UI, as shown below.</p>



<figure class="wp-block-image size-large"><img loading="lazy" width="1024" height="704" src="https://nolowiz.com/wp-content/uploads/2026/02/image-5-1024x704.png" alt="" class="wp-image-7040" srcset="https://nolowiz.com/wp-content/uploads/2026/02/image-5-1024x704.png 1024w, https://nolowiz.com/wp-content/uploads/2026/02/image-5-300x206.png 300w, https://nolowiz.com/wp-content/uploads/2026/02/image-5-768x528.png 768w, https://nolowiz.com/wp-content/uploads/2026/02/image-5-150x103.png 150w, https://nolowiz.com/wp-content/uploads/2026/02/image-5.png 1067w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<h2>Step 3: Run the Model</h2>



<p>Run the below command to start the model with an interactive CLI:</p>



<pre class="wp-block-code"><code>docker model run ai/smollm2:360M-Q4_K_M</code></pre>



<figure class="wp-block-image size-full"><img loading="lazy" width="514" height="137" src="https://nolowiz.com/wp-content/uploads/2026/02/image-4.png" alt="" class="wp-image-7038" srcset="https://nolowiz.com/wp-content/uploads/2026/02/image-4.png 514w, https://nolowiz.com/wp-content/uploads/2026/02/image-4-300x80.png 300w, https://nolowiz.com/wp-content/uploads/2026/02/image-4-150x40.png 150w" sizes="(max-width: 514px) 100vw, 514px" /></figure>



<h2>How to use Model API</h2>



<p>By default, Docker Model Runner may only be accessible via a Unix socket or internal Docker networking. To call it from your host machine (e.g., via <code>curl</code> or Postman), you must explicitly enable TCP host access. Since we enabled this in Docker Desktop above, the API is ready to use.</p>



<p>For the Docker CLI, use the below command to enable it:</p>



<pre class="wp-block-code"><code>docker desktop enable model-runner --tcp=12434</code></pre>



<p>Docker Model Runner uses an OpenAI-compatible API, but the path is prefixed with the inference engine; the model name goes in the request body. Base URL structure:</p>



<ul><li><strong>From Host:</strong> <code>http://localhost:12434/engines/v1</code></li><li><strong>From inside a Container:</strong> <code>http://model-runner.docker.internal:12434/engines/v1</code></li></ul>
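As a sketch, the host base URL can be exercised with only the Python standard library. This assumes DMR is enabled on the default TCP port 12434; the request object is built locally and would only be sent once the server is actually running.

```python
import json
import urllib.request

BASE_URL = "http://localhost:12434/engines/v1"  # host access, default TCP port

def build_chat_request(model, prompt):
    """Build (but do not send) an OpenAI-style chat completion request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request("ai/smollm2:360M-Q4_K_M", "Hello!")
print(req.full_url)  # http://localhost:12434/engines/v1/chat/completions
# With DMR running, send it: response = urllib.request.urlopen(req)
```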



<p>Testing the API with Postman.</p>



<figure class="wp-block-image size-full"><img loading="lazy" width="809" height="641" src="https://nolowiz.com/wp-content/uploads/2026/02/image-6.png" alt="" class="wp-image-7045" srcset="https://nolowiz.com/wp-content/uploads/2026/02/image-6.png 809w, https://nolowiz.com/wp-content/uploads/2026/02/image-6-300x238.png 300w, https://nolowiz.com/wp-content/uploads/2026/02/image-6-768x609.png 768w, https://nolowiz.com/wp-content/uploads/2026/02/image-6-150x119.png 150w" sizes="(max-width: 809px) 100vw, 809px" /></figure>



<p>We can connect with OpenAI-compatible libraries; here is an example in Python:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: python; title: ; notranslate">
from openai import OpenAI

client = OpenAI(
    base_url=&quot;http://localhost:12434/engines/v1&quot;,
    api_key=&quot;not-needed&quot;,
)

response = client.chat.completions.create(
    model=&quot;ai/smollm2:360M-Q4_K_M&quot;, messages=&#91;{&quot;role&quot;: &quot;user&quot;, &quot;content&quot;: &quot;Hello!&quot;}]
)
print(response.choices&#91;0].message.content)
</pre></div>





<h2>Context Size</h2>



<p>The context size is the total token budget for each request, split between the tokens you send in and the tokens the model generates back. Per the DMR documentation, the default context sizes for the engines are:</p>



<ul><li>llama.cpp &#8211; 4096</li><li>vLLM &#8211; Uses the model&#8217;s maximum trained context size</li></ul>



<p>We can configure the model context size using the below command:</p>



<pre class="wp-block-code"><code>docker model configure --context-size 8192 ai/qwen2.5-coder</code></pre>



<h2>When to Use Docker Model Runner</h2>



<p>Use Docker Model Runner in the following scenarios.</p>



<ul><li><strong>Local development &amp; testing</strong> &#8211; Develop locally without costly API calls or the privacy concerns of cloud APIs.</li><li><strong>Privacy-sensitive workloads</strong> &#8211; Keep confidential data fully under your control.</li><li><strong>Docker-native workflows</strong> &#8211; Use familiar <code>docker model pull/run</code> commands with no new toolchain to learn.</li><li><strong>Multi-container AI apps with Compose</strong> &#8211; Define models directly in <code>compose.yml</code> alongside your app services with zero extra glue code.</li><li><strong>Offline / edge environments</strong> &#8211; Run models locally where cloud API access isn&#8217;t reliable or allowed.</li><li><strong>CI/CD pipelines</strong> &#8211; Pull, tag, version, and deploy models like any other artifact; no GPU cluster required.</li></ul>
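The Compose integration can be sketched as follows. This is a hedged example based on the Compose top-level models element; the service name "app" and model name "llm" are illustrative, not from an official example.

```yaml
# compose.yml (sketch) - "app" and "llm" are illustrative names
services:
  app:
    image: my-app:latest
    models:
      - llm          # Compose wires the model's endpoint into this service

models:
  llm:
    model: ai/smollm2:360M-Q4_K_M
```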



<h2>Quick Troubleshooting </h2>



<p>1. To check whether Docker Model Runner (DMR) is running, use the below command:</p>



<pre class="wp-block-code"><code>docker model status</code></pre>



<p>2. To list pulled models, use the below command:</p>



<pre class="wp-block-code"><code>docker model ls</code></pre>



<p>3. To display detailed information about a specific model:</p>



<pre class="wp-block-code"><code>docker model inspect &lt;model_name></code></pre>



<p>E.g. <em><code>docker model inspect ai/smollm2:360M-Q4_K_M</code></em></p>



<p>4. Test basic connectivity (List Models): </p>



<pre class="wp-block-code"><code>curl http://localhost:12434/engines/v1/models</code></pre>



<h2>Conclusion</h2>



<p>Docker Model Runner makes it easy to run AI models locally while staying within the Docker ecosystem. See also: <a href="https://nolowiz.com/ollama-api-run-large-language-models-locally-with-simple-apis/" target="_blank" rel="noreferrer noopener">Run Large Language Models Locally with Simple APIs</a>.</p>
<p>The post <a rel="nofollow" href="https://nolowiz.com/run-ai-models-with-docker-model-runner-a-step-by-step-guide/">Run AI Models with Docker Model Runner: A Step-by-Step Guide</a> appeared first on <a rel="nofollow" href="https://nolowiz.com">NoloWiz</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Agent Skills: Complete Beginner’s Guide to AI Agent Skills and Best Practices</title>
		<link>https://nolowiz.com/agent-skills-complete-beginners-guide-to-ai-agent-skills-and-best-practices/</link>
		
		<dc:creator><![CDATA[Rupesh Sreeraman]]></dc:creator>
		<pubDate>Thu, 26 Feb 2026 13:50:17 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<guid isPermaLink="false">https://nolowiz.com/?p=6971</guid>

					<description><![CDATA[<p>In this article we will understand the concept of AI agent skills. AI agents are evolving rapidly. From simple prompt based bots to autonomous systems that can search, reason, and execute tools, the architecture behind modern agents is becoming more structured. Agent Skills The open Agent Skills standard was introduced by Anthropic. Agent skill is ... <a title="Agent Skills: Complete Beginner’s Guide to AI Agent Skills and Best Practices" class="read-more" href="https://nolowiz.com/agent-skills-complete-beginners-guide-to-ai-agent-skills-and-best-practices/" aria-label="More on Agent Skills: Complete Beginner’s Guide to AI Agent Skills and Best Practices">Read more</a></p>
<p>The post <a rel="nofollow" href="https://nolowiz.com/agent-skills-complete-beginners-guide-to-ai-agent-skills-and-best-practices/">Agent Skills: Complete Beginner’s Guide to AI Agent Skills and Best Practices</a> appeared first on <a rel="nofollow" href="https://nolowiz.com">NoloWiz</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>In this article we will understand the concept of AI agent skills. AI agents are evolving rapidly. From simple prompt-based bots to autonomous systems that can search, reason, and execute tools, the architecture behind modern agents is becoming more structured.</p>



<h2>Agent Skills</h2>



<p>The open Agent Skills standard was introduced by <a href="https://www.anthropic.com/" target="_blank" rel="noreferrer noopener">Anthropic</a>. An agent skill is a modular add-on that gives AI agents new abilities, from coding best practices to video editing. Skills are a new open standard for packaging reusable expertise into modular units that any compatible AI agent can discover, load, and apply on demand. Think of them as plugins for your agent’s brain: instead of repeating the same long prompt every time you want your AI to follow your team’s React conventions or generate a proper Dockerfile, you install a skill once and the agent applies it automatically whenever relevant.</p>






<h2>Skill Folder Structure</h2>



<p>At its core, a skill is simply a directory that contains a <strong>SKILL.md</strong> file. This file holds essential metadata such as the skill’s name and description along with detailed instructions that guide an agent in completing a specific task.</p>



<figure class="wp-block-image size-full"><img loading="lazy" width="553" height="273" src="https://nolowiz.com/wp-content/uploads/2026/02/image.png" alt="Agent skill" class="wp-image-6982" srcset="https://nolowiz.com/wp-content/uploads/2026/02/image.png 553w, https://nolowiz.com/wp-content/uploads/2026/02/image-300x148.png 300w, https://nolowiz.com/wp-content/uploads/2026/02/image-150x74.png 150w" sizes="(max-width: 553px) 100vw, 553px" /></figure>



<pre class="wp-block-code"><code>my-skill/
├── SKILL.md          # Required: instructions + metadata
├── scripts/          # Optional: executable code
├── references/       # Optional: documentation
└── assets/           # Optional: templates, resources</code></pre>
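For illustration, this layout can be scaffolded with a few lines of Python. The skill name and placeholder contents here are ours, not part of the standard.

```python
import os
import tempfile

def scaffold_skill(root, name):
    """Create the minimal skill layout: SKILL.md plus the optional folders."""
    skill_dir = os.path.join(root, name)
    for sub in ("scripts", "references", "assets"):   # all optional
        os.makedirs(os.path.join(skill_dir, sub), exist_ok=True)
    # SKILL.md is the only required file: YAML frontmatter, then instructions.
    with open(os.path.join(skill_dir, "SKILL.md"), "w") as f:
        f.write(f"---\nname: {name}\ndescription: TODO\n---\n\n## Instructions\n")
    return skill_dir

path = scaffold_skill(tempfile.mkdtemp(), "my-skill")
print(sorted(os.listdir(path)))  # ['SKILL.md', 'assets', 'references', 'scripts']
```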



<p></p>



<h2>Agent Skills Format</h2>



<p>A skill is a directory containing at least one file called <strong>SKILL.md</strong>. Optional directories such as <code><strong>scripts/</strong></code>, <code><strong>references/</strong></code>, and <code><strong>assets/</strong></code> can be added to provide extra functionality and resources for your skill.</p>



<h3>SKILL.md file</h3>



<p>The <code>SKILL.md</code> file must begin with YAML frontmatter (the introductory metadata section at the top of the file), followed by the main content written in Markdown.</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: yaml; title: ; notranslate">
---
name: skill-name
description: A description of what this skill does and when to use it.
---
</pre></div>


<p>The <em>name</em> and <em>description</em> fields are required; optional fields include <em>allowed-tools</em>, <em>metadata</em>, and <em>license</em>.</p>



<ul><li><strong>name</strong> &#8211; Lowercase letters, numbers, and hyphens only (max 64 characters), e.g. <em>name: code-review</em></li><li><strong>description</strong> &#8211; A clear description of what the skill does and when to use it (max 1024 characters)</li><li><strong>license (optional)</strong> &#8211; The license applied to the skill, e.g. <em>license: Proprietary. LICENSE.txt has complete terms</em></li><li><strong>allowed-tools (optional)</strong> &#8211; The set of tools the skill is allowed to use, e.g. <em>allowed-tools: Read, Grep</em></li><li><strong>metadata (optional)</strong> &#8211; Additional data as key-value pairs</li><li><strong>compatibility (optional)</strong> &#8211; Whether the skill is intended for a particular environment, e.g. <em>compatibility: Designed for Claude Code (or similar products)</em></li></ul>
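<p>Putting these fields together, a complete frontmatter block might look like the following (all values are illustrative):</p>

```yaml
---
name: code-review
description: Reviews code changes for style, correctness, and security issues. Use when the user asks for a code review.
license: Proprietary. LICENSE.txt has complete terms
allowed-tools: Read, Grep
metadata:
  author: example-team
  version: "1.0"
compatibility: Designed for Claude Code (or similar products)
---
```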



<p></p>



<script async="" src="https://pagead2.googlesyndication.com/pagead/js/adsbygoogle.js?client=ca-pub-2735334721002354" crossorigin="anonymous"></script>
<ins class="adsbygoogle" style="display:block; text-align:center;" data-ad-layout="in-article" data-ad-format="fluid" data-ad-client="ca-pub-2735334721002354" data-ad-slot="1582583983"></ins>
<script>
     (adsbygoogle = window.adsbygoogle || []).push({});
</script>



<p></p>



<p>Finally, there is the skill body, which contains the skill&#8217;s instructions. It should include the following recommended sections:</p>



<ul><li>Step-by-step instructions</li><li>Examples of inputs and outputs</li><li>Common edge cases</li></ul>



<p>Here is a simple example of a SKILL.md file.</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: yaml; title: ; notranslate">
---
name: weather-retriever
description: Fetches real-time weather data and forecasts for any city globally. Use this when the user asks about current conditions or travel planning.
---

## Instructions
1. Extract the `city_name` and `units` (metric/imperial) from the user prompt.
2. If the city is missing, ask for clarification before proceeding.
3. Call the `get_weather_data` function using the extracted parameters.
4. Format the output into a friendly, 2-line summary for the user.

## Tools &amp; Resources
- **Code:** `weather_api_client.py`
- **Data:** `city_codes.json` (for validation)

## Constraints
- Do not provide forecasts beyond 7 days.
- Always include the &quot;Last Updated&quot; timestamp in the response.
</pre></div>


<h3>Optional directories</h3>



<ul><li><strong>scripts/</strong> &#8211; contains executable code (Python, JavaScript, or Bash) that agents can run to perform actions or computations</li><li><strong>references/</strong> &#8211; holds extra documentation and reference files that the agent can read on demand, for example a REFERENCE.md with detailed reference material</li><li><strong>assets/</strong> &#8211; stores static resources such as templates, images, or data files used by the skill</li></ul>



<h2>How Agent Skills Work</h2>



<p>Agent skills have the following life cycle:</p>



<ul><li><strong>Discovery </strong>&#8211; The agent scans available skills and reads their names and descriptions to understand what capabilities are available.</li><li><strong>Activation </strong>&#8211; When a task matches a skill’s purpose, the agent loads and reads the full <strong>SKILL.md</strong> instructions.</li><li><strong>Execution</strong> &#8211; The agent follows the skill’s instructions, using any scripts, assets, or references required to complete the task.</li></ul>
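<p>The discovery step can be sketched in Python: scan a <code>skills/</code> directory and read each <code>SKILL.md</code>&#8217;s frontmatter to build a catalog of names and descriptions. This is a minimal illustration with a deliberately simple parser (no external YAML library), not part of any official tooling:</p>

```python
from pathlib import Path

def parse_frontmatter(text):
    """Extract flat key: value pairs from a leading '---' block.

    Deliberately simple: nested fields such as `metadata` are not handled.
    """
    meta = {}
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return meta
    for line in lines[1:]:
        if line.strip() == "---":
            break  # end of frontmatter
        if ":" in line:
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta

def discover_skills(skills_dir):
    """Return {skill name: description} for every SKILL.md found."""
    found = {}
    for skill_md in Path(skills_dir).glob("*/SKILL.md"):
        meta = parse_frontmatter(skill_md.read_text())
        if "name" in meta and "description" in meta:
            found[meta["name"]] = meta["description"]
    return found
```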



<figure class="wp-block-image size-full"><img loading="lazy" width="1024" height="743" src="https://nolowiz.com/wp-content/uploads/2026/02/agent-skill-lifecycle.jpg" alt="Agent skill life cycle" class="wp-image-6995" srcset="https://nolowiz.com/wp-content/uploads/2026/02/agent-skill-lifecycle.jpg 1024w, https://nolowiz.com/wp-content/uploads/2026/02/agent-skill-lifecycle-300x218.jpg 300w, https://nolowiz.com/wp-content/uploads/2026/02/agent-skill-lifecycle-768x557.jpg 768w, https://nolowiz.com/wp-content/uploads/2026/02/agent-skill-lifecycle-150x109.jpg 150w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<p>You can refer to the skills published by Anthropic in <a href="https://github.com/anthropics/skills/tree/main/skills" target="_blank" rel="noreferrer noopener">this GitHub repo</a>.</p>



<h2>MCP vs Agent Skills</h2>



<p>The key differences between MCP (Model Context Protocol) and agent skills are listed below.</p>



<figure class="wp-block-image size-full"><img loading="lazy" width="739" height="389" src="https://nolowiz.com/wp-content/uploads/2026/02/image-1.png" alt="" class="wp-image-6999" srcset="https://nolowiz.com/wp-content/uploads/2026/02/image-1.png 739w, https://nolowiz.com/wp-content/uploads/2026/02/image-1-300x158.png 300w, https://nolowiz.com/wp-content/uploads/2026/02/image-1-150x79.png 150w" sizes="(max-width: 739px) 100vw, 739px" /></figure>



<h2>Best Practices for Agent Skills</h2>



<p>Follow these best practices when working with agent skills:</p>



<ul><li>Create a dedicated folder per skill (e.g.,&nbsp;<code>pdf-parsing/</code>) inside a&nbsp;<code>skills/</code>&nbsp;directory.</li><li>Define &#8220;When to use&#8221; and &#8220;How to use&#8221; sections in&nbsp;<code>SKILL.md</code>&nbsp;with clear steps, parameters, and examples.</li><li>Keep SKILL.md within 500 lines. If it grows beyond that, evaluate whether some sections should be moved into separate reference files.</li><li>If you are using third-party skills, ensure they do not contain any malicious instructions or code.</li></ul>



<p></p>



<script async="" src="https://pagead2.googlesyndication.com/pagead/js/adsbygoogle.js?client=ca-pub-2735334721002354" crossorigin="anonymous"></script>
<ins class="adsbygoogle" style="display:block; text-align:center;" data-ad-layout="in-article" data-ad-format="fluid" data-ad-client="ca-pub-2735334721002354" data-ad-slot="1582583983"></ins>
<script>
     (adsbygoogle = window.adsbygoogle || []).push({});
</script>



<p></p>



<h2>When to use Agent Skills</h2>



<p>Use agent skills in the following scenarios:</p>



<ul><li>When tasks are reusable (e.g. a web research skill or a blog outline generator skill)</li><li>When you want separation between &#8220;brain&#8221; and &#8220;tools&#8221;: think of the LLM as the brain (reasoning) and skills as the hands (execution). If your system only needs thinking, no skill is needed; if it needs doing, skills are required</li><li>Avoid skills for one-off tasks (use prompts) or real-time external access (use MCP/tools)</li></ul>



<h2>Security Risks </h2>



<p>The following security risks are associated with third-party agent skills:</p>



<ul><li><strong>Malicious Code Injection</strong> : Third-party skills often bundle executable instructions or scripts (e.g., hidden curl commands in Markdown) that AI agents execute blindly, enabling data exfiltration, backdoors, or system compromise without human review.</li><li><strong>Privilege Escalation</strong> : Skills frequently request excessive permissions, such as sudo access, credential stores, or root execution, far beyond their stated needs, amplifying the damage if they are exploited.</li></ul>



<p>OpenClaw and VirusTotal are now <a href="https://openclaw.ai/blog/virustotal-partnership" target="_blank" rel="noreferrer noopener">collaborating to scan </a>ClawHub, the marketplace for agent skills. You can also use <a href="https://github.com/cisco-ai-defense/skill-scanner" target="_blank" rel="noreferrer noopener">Skill Scanner</a> by Cisco for free. </p>



<pre class="wp-block-code"><code>skill-scanner scan &lt;skill folder path&gt;</code></pre>



<figure class="wp-block-image size-full is-resized"><img loading="lazy" src="https://nolowiz.com/wp-content/uploads/2026/02/skillscan.jpg" alt="Skill scanner by cisco" class="wp-image-7016" width="751" height="294" srcset="https://nolowiz.com/wp-content/uploads/2026/02/skillscan.jpg 751w, https://nolowiz.com/wp-content/uploads/2026/02/skillscan-300x117.jpg 300w, https://nolowiz.com/wp-content/uploads/2026/02/skillscan-150x59.jpg 150w" sizes="(max-width: 751px) 100vw, 751px" /></figure>



<h2>Conclusion</h2>



<p>Agent Skills are transforming how AI agents move from simple chatbots to capable task executors. By packaging instructions, tools, and structured workflows into reusable skill modules, you can build agents that are scalable, maintainable, and easier to extend.</p>
<p><a class="a2a_button_twitter" href="https://www.addtoany.com/add_to/twitter?linkurl=https%3A%2F%2Fnolowiz.com%2Fagent-skills-complete-beginners-guide-to-ai-agent-skills-and-best-practices%2F&amp;linkname=Agent%20Skills%3A%20Complete%20Beginner%E2%80%99s%20Guide%20to%20AI%20Agent%20Skills%20and%20Best%20Practices" title="Twitter" rel="nofollow noopener" target="_blank"></a><a class="a2a_button_whatsapp" href="https://www.addtoany.com/add_to/whatsapp?linkurl=https%3A%2F%2Fnolowiz.com%2Fagent-skills-complete-beginners-guide-to-ai-agent-skills-and-best-practices%2F&amp;linkname=Agent%20Skills%3A%20Complete%20Beginner%E2%80%99s%20Guide%20to%20AI%20Agent%20Skills%20and%20Best%20Practices" title="WhatsApp" rel="nofollow noopener" target="_blank"></a><a class="a2a_button_facebook" href="https://www.addtoany.com/add_to/facebook?linkurl=https%3A%2F%2Fnolowiz.com%2Fagent-skills-complete-beginners-guide-to-ai-agent-skills-and-best-practices%2F&amp;linkname=Agent%20Skills%3A%20Complete%20Beginner%E2%80%99s%20Guide%20to%20AI%20Agent%20Skills%20and%20Best%20Practices" title="Facebook" rel="nofollow noopener" target="_blank"></a><a class="a2a_button_copy_link" href="https://www.addtoany.com/add_to/copy_link?linkurl=https%3A%2F%2Fnolowiz.com%2Fagent-skills-complete-beginners-guide-to-ai-agent-skills-and-best-practices%2F&amp;linkname=Agent%20Skills%3A%20Complete%20Beginner%E2%80%99s%20Guide%20to%20AI%20Agent%20Skills%20and%20Best%20Practices" title="Copy Link" rel="nofollow noopener" target="_blank"></a><a class="a2a_dd addtoany_share_save addtoany_share" href="https://www.addtoany.com/share#url=https%3A%2F%2Fnolowiz.com%2Fagent-skills-complete-beginners-guide-to-ai-agent-skills-and-best-practices%2F&#038;title=Agent%20Skills%3A%20Complete%20Beginner%E2%80%99s%20Guide%20to%20AI%20Agent%20Skills%20and%20Best%20Practices" data-a2a-url="https://nolowiz.com/agent-skills-complete-beginners-guide-to-ai-agent-skills-and-best-practices/" data-a2a-title="Agent Skills: Complete Beginner’s 
Guide to AI Agent Skills and Best Practices"></a></p><p>The post <a rel="nofollow" href="https://nolowiz.com/agent-skills-complete-beginners-guide-to-ai-agent-skills-and-best-practices/">Agent Skills: Complete Beginner’s Guide to AI Agent Skills and Best Practices</a> appeared first on <a rel="nofollow" href="https://nolowiz.com">NoloWiz</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Gemini Prompts for Image Restoration: Fix Old And Blurry Photos Easily</title>
		<link>https://nolowiz.com/gemini-prompts-for-image-restoration-fix-old-and-blurry-photos-easily/</link>
		
		<dc:creator><![CDATA[Rupesh Sreeraman]]></dc:creator>
		<pubDate>Wed, 25 Feb 2026 05:52:41 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Prompts]]></category>
		<guid isPermaLink="false">https://nolowiz.com/?p=6943</guid>

					<description><![CDATA[<p>In this guide, you’ll learn how to use Gemini for image restoration using practical, tested prompts. Whether you want to restore old family photographs, fix blurry images, improve low-resolution pictures, or repair damaged areas, this article will walk you through step-by-step prompts that actually work. Old photos fade. Scanned images lose clarity. Blurry pictures and ... <a title="Gemini Prompts for Image Restoration: Fix Old And Blurry Photos Easily" class="read-more" href="https://nolowiz.com/gemini-prompts-for-image-restoration-fix-old-and-blurry-photos-easily/" aria-label="More on Gemini Prompts for Image Restoration: Fix Old And Blurry Photos Easily">Read more</a></p>
<p>The post <a rel="nofollow" href="https://nolowiz.com/gemini-prompts-for-image-restoration-fix-old-and-blurry-photos-easily/">Gemini Prompts for Image Restoration: Fix Old And Blurry Photos Easily</a> appeared first on <a rel="nofollow" href="https://nolowiz.com">NoloWiz</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>In this guide, you’ll learn how to use Gemini for image restoration using practical, tested prompts. Whether you want to restore old family photographs, fix blurry images, improve low-resolution pictures, or repair damaged areas, this article will walk you through step-by-step prompts that actually work.</p>



<p>Old photos fade. Scanned images lose clarity. Blurry pictures and damaged memories often feel impossible to fix without expensive software or professional help.</p>



<p>But with advancements in AI, restoring images is no longer limited to tools like Photoshop. Today, you can use powerful prompts inside Gemini, the AI model developed by Google, to enhance, repair, and restore images with simple instructions.</p>



<p></p>



<script async src="https://pagead2.googlesyndication.com/pagead/js/adsbygoogle.js?client=ca-pub-2735334721002354"
     crossorigin="anonymous"></script>
<ins class="adsbygoogle"
     style="display:block; text-align:center;"
     data-ad-layout="in-article"
     data-ad-format="fluid"
     data-ad-client="ca-pub-2735334721002354"
     data-ad-slot="1582583983"></ins>
<script>
     (adsbygoogle = window.adsbygoogle || []).push({});
</script>



<p></p>



<h2>Remove Scratches and Restore Old Photos</h2>



<p>Old photos often carry priceless memories, but over time they can become damaged with scratches, dust spots, tears, and fading. Whether it’s a black-and-white family portrait or a faded childhood picture, modern Gen AI restoration technology helps bring clarity, sharpness, and life back to your treasured moments in just a few simple steps.</p>



<p>The original photo is shown below:</p>



<figure class="wp-block-image size-full"><img loading="lazy" width="256" height="256" src="https://nolowiz.com/wp-content/uploads/2026/02/b.png" alt="" class="wp-image-6948" srcset="https://nolowiz.com/wp-content/uploads/2026/02/b.png 256w, https://nolowiz.com/wp-content/uploads/2026/02/b-150x150.png 150w, https://nolowiz.com/wp-content/uploads/2026/02/b-120x120.png 120w, https://nolowiz.com/wp-content/uploads/2026/02/b-96x96.png 96w" sizes="(max-width: 256px) 100vw, 256px" /></figure>



<p>Use the prompt below to remove scratches, restore clarity, and colorize the photo.</p>



<pre class="wp-block-code"><code>Full professional restoration of this vintage photograph. Remove all damage including tears, fading, scratches, and discoloration. Sharpen facial features and enhance contrast. Output high-resolution</code></pre>



<figure class="wp-block-image size-full"><img loading="lazy" width="512" height="512" src="https://nolowiz.com/wp-content/uploads/2026/02/restore1.jpg" alt="" class="wp-image-6949" srcset="https://nolowiz.com/wp-content/uploads/2026/02/restore1.jpg 512w, https://nolowiz.com/wp-content/uploads/2026/02/restore1-300x300.jpg 300w, https://nolowiz.com/wp-content/uploads/2026/02/restore1-150x150.jpg 150w, https://nolowiz.com/wp-content/uploads/2026/02/restore1-120x120.jpg 120w, https://nolowiz.com/wp-content/uploads/2026/02/restore1-96x96.jpg 96w" sizes="(max-width: 512px) 100vw, 512px" /></figure>



<p>Use the prompt below to remove scratches, tears, and fading.</p>



<pre class="wp-block-code"><code>Restore this damaged photograph: Remove all scratches, tears, creases, dust spots, and stains. Repair fading, enhance clarity and sharpness, preserving original mood</code></pre>



<figure class="wp-block-image size-full"><img loading="lazy" width="512" height="512" src="https://nolowiz.com/wp-content/uploads/2026/02/resz.jpg" alt="" class="wp-image-6953" srcset="https://nolowiz.com/wp-content/uploads/2026/02/resz.jpg 512w, https://nolowiz.com/wp-content/uploads/2026/02/resz-300x300.jpg 300w, https://nolowiz.com/wp-content/uploads/2026/02/resz-150x150.jpg 150w, https://nolowiz.com/wp-content/uploads/2026/02/resz-120x120.jpg 120w, https://nolowiz.com/wp-content/uploads/2026/02/resz-96x96.jpg 96w" sizes="(max-width: 512px) 100vw, 512px" /></figure>



<h2>Colorize Old Black &amp; White Photos</h2>



<p>Black and white photos capture timeless moments, but adding color can make them feel more real and emotionally powerful. With <a href="https://gemini.google.com/" target="_blank" rel="noreferrer noopener">Gemini</a>, you can automatically transform old monochrome images into vibrant, natural-looking photographs. These tools intelligently detect objects, skin tones, clothing, and backgrounds to apply realistic colors while preserving the original details.</p>



<p>We will use this black and white photo for colorization :</p>



<figure class="wp-block-image size-full"><img loading="lazy" width="513" height="640" src="https://nolowiz.com/wp-content/uploads/2026/02/janeb13-albert-einstein-1145030_640.jpg" alt="" class="wp-image-6954" srcset="https://nolowiz.com/wp-content/uploads/2026/02/janeb13-albert-einstein-1145030_640.jpg 513w, https://nolowiz.com/wp-content/uploads/2026/02/janeb13-albert-einstein-1145030_640-240x300.jpg 240w, https://nolowiz.com/wp-content/uploads/2026/02/janeb13-albert-einstein-1145030_640-150x187.jpg 150w" sizes="(max-width: 513px) 100vw, 513px" /></figure>



<p>Use the prompt below to colorize old photos.</p>



<pre class="wp-block-code"><code>Colorize this black and white image realistically.
Use natural skin tones and historically accurate colors.
Avoid artificial saturation.</code></pre>



<figure class="wp-block-image size-full"><img loading="lazy" width="512" height="658" src="https://nolowiz.com/wp-content/uploads/2026/02/einstein.jpg" alt="" class="wp-image-6956" srcset="https://nolowiz.com/wp-content/uploads/2026/02/einstein.jpg 512w, https://nolowiz.com/wp-content/uploads/2026/02/einstein-233x300.jpg 233w, https://nolowiz.com/wp-content/uploads/2026/02/einstein-150x193.jpg 150w" sizes="(max-width: 512px) 100vw, 512px" /></figure>



<p></p>



<script async src="https://pagead2.googlesyndication.com/pagead/js/adsbygoogle.js?client=ca-pub-2735334721002354"
     crossorigin="anonymous"></script>
<ins class="adsbygoogle"
     style="display:block; text-align:center;"
     data-ad-layout="in-article"
     data-ad-format="fluid"
     data-ad-client="ca-pub-2735334721002354"
     data-ad-slot="1582583983"></ins>
<script>
     (adsbygoogle = window.adsbygoogle || []).push({});
</script>



<h2>Upscale Low Resolution Image</h2>



<p>Low-resolution images often appear blurry, pixelated, or lacking in detail, especially when viewed on larger screens. With Gemini Nano Banana, you can enhance image resolution without losing clarity or sharpness.</p>



<figure class="wp-block-image size-full"><img loading="lazy" width="512" height="342" src="https://nolowiz.com/wp-content/uploads/2026/02/low.jpg" alt="" class="wp-image-6958" srcset="https://nolowiz.com/wp-content/uploads/2026/02/low.jpg 512w, https://nolowiz.com/wp-content/uploads/2026/02/low-300x200.jpg 300w, https://nolowiz.com/wp-content/uploads/2026/02/low-150x100.jpg 150w" sizes="(max-width: 512px) 100vw, 512px" /></figure>



<pre class="wp-block-code"><code>Restore and enhance this photograph by improving clarity, sharpness, and image quality while preserving all original details, colors, and composition exactly as they appear - remove noise, blur, and degradation artifacts to create a professionally restored HD version that maintains complete authenticity without any artistic interpretation or alterations.</code></pre>



<figure class="wp-block-image size-full"><img loading="lazy" width="512" height="341" src="https://nolowiz.com/wp-content/uploads/2026/02/high.jpg" alt="" class="wp-image-6959" srcset="https://nolowiz.com/wp-content/uploads/2026/02/high.jpg 512w, https://nolowiz.com/wp-content/uploads/2026/02/high-300x200.jpg 300w, https://nolowiz.com/wp-content/uploads/2026/02/high-150x100.jpg 150w" sizes="(max-width: 512px) 100vw, 512px" /></figure>



<h2>Conclusion</h2>



<p>Thanks to the latest advances in AI, it is now easy to restore old photos and colorize them using Gemini.</p>
<p><a class="a2a_button_twitter" href="https://www.addtoany.com/add_to/twitter?linkurl=https%3A%2F%2Fnolowiz.com%2Fgemini-prompts-for-image-restoration-fix-old-and-blurry-photos-easily%2F&amp;linkname=Gemini%20Prompts%20for%20Image%20Restoration%3A%20Fix%20Old%20And%20Blurry%20Photos%20Easily" title="Twitter" rel="nofollow noopener" target="_blank"></a><a class="a2a_button_whatsapp" href="https://www.addtoany.com/add_to/whatsapp?linkurl=https%3A%2F%2Fnolowiz.com%2Fgemini-prompts-for-image-restoration-fix-old-and-blurry-photos-easily%2F&amp;linkname=Gemini%20Prompts%20for%20Image%20Restoration%3A%20Fix%20Old%20And%20Blurry%20Photos%20Easily" title="WhatsApp" rel="nofollow noopener" target="_blank"></a><a class="a2a_button_facebook" href="https://www.addtoany.com/add_to/facebook?linkurl=https%3A%2F%2Fnolowiz.com%2Fgemini-prompts-for-image-restoration-fix-old-and-blurry-photos-easily%2F&amp;linkname=Gemini%20Prompts%20for%20Image%20Restoration%3A%20Fix%20Old%20And%20Blurry%20Photos%20Easily" title="Facebook" rel="nofollow noopener" target="_blank"></a><a class="a2a_button_copy_link" href="https://www.addtoany.com/add_to/copy_link?linkurl=https%3A%2F%2Fnolowiz.com%2Fgemini-prompts-for-image-restoration-fix-old-and-blurry-photos-easily%2F&amp;linkname=Gemini%20Prompts%20for%20Image%20Restoration%3A%20Fix%20Old%20And%20Blurry%20Photos%20Easily" title="Copy Link" rel="nofollow noopener" target="_blank"></a><a class="a2a_dd addtoany_share_save addtoany_share" href="https://www.addtoany.com/share#url=https%3A%2F%2Fnolowiz.com%2Fgemini-prompts-for-image-restoration-fix-old-and-blurry-photos-easily%2F&#038;title=Gemini%20Prompts%20for%20Image%20Restoration%3A%20Fix%20Old%20And%20Blurry%20Photos%20Easily" data-a2a-url="https://nolowiz.com/gemini-prompts-for-image-restoration-fix-old-and-blurry-photos-easily/" data-a2a-title="Gemini Prompts for Image Restoration: Fix Old And Blurry Photos Easily"></a></p><p>The post <a rel="nofollow" 
href="https://nolowiz.com/gemini-prompts-for-image-restoration-fix-old-and-blurry-photos-easily/">Gemini Prompts for Image Restoration: Fix Old And Blurry Photos Easily</a> appeared first on <a rel="nofollow" href="https://nolowiz.com">NoloWiz</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Ollama API: Run Large Language Models Locally with Simple APIs</title>
		<link>https://nolowiz.com/ollama-api-run-large-language-models-locally-with-simple-apis/</link>
		
		<dc:creator><![CDATA[Rupesh Sreeraman]]></dc:creator>
		<pubDate>Wed, 31 Dec 2025 01:41:27 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<guid isPermaLink="false">https://nolowiz.com/?p=6864</guid>

					<description><![CDATA[<p>Running Large Language Models (LLMs) locally is becoming increasingly important for developers who care about privacy, cost, latency, and offline access. Ollama makes this practical by providing a clean CLI and a simple HTTP API to run models like Llama, Mistral, Gemma, and more on your own machine. In this post, we’ll explore what the ... <a title="Ollama API: Run Large Language Models Locally with Simple APIs" class="read-more" href="https://nolowiz.com/ollama-api-run-large-language-models-locally-with-simple-apis/" aria-label="More on Ollama API: Run Large Language Models Locally with Simple APIs">Read more</a></p>
<p>The post <a rel="nofollow" href="https://nolowiz.com/ollama-api-run-large-language-models-locally-with-simple-apis/">Ollama API: Run Large Language Models Locally with Simple APIs</a> appeared first on <a rel="nofollow" href="https://nolowiz.com">NoloWiz</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Running Large Language Models (LLMs) locally is becoming increasingly important for developers who care about privacy, cost, latency, and offline access. Ollama makes this practical by providing a clean CLI and a simple HTTP API to run models like Llama, Mistral, Gemma, and more on your own machine.</p>



<p>In this post, we’ll explore what the Ollama API is, how it works, and how to use it in real applications.</p>



<p><a href="https://ollama.com/" target="_blank" rel="noreferrer noopener">Ollama</a> is a local AI runtime that lets you run open-source large language models on your own machine. It provides a simple CLI and HTTP API to download, manage, and interact with models privately, offline, and without relying on cloud-based AI services. To learn the basics, read our article on <a href="https://nolowiz.com/essential-ollama-commands-a-complete-guide/" target="_blank" rel="noreferrer noopener">Essential Ollama Commands</a>.</p>



<p></p>



<script async src="https://pagead2.googlesyndication.com/pagead/js/adsbygoogle.js?client=ca-pub-2735334721002354"
     crossorigin="anonymous"></script>
<!-- article-horizontal -->
<ins class="adsbygoogle"
     style="display:block"
     data-ad-client="ca-pub-2735334721002354"
     data-ad-slot="8835878737"
     data-ad-format="auto"
     data-full-width-responsive="true"></ins>
<script>
     (adsbygoogle = window.adsbygoogle || []).push({});
</script>



<p></p>



<h2>Installing Ollama</h2>



<p>First, download and install Ollama from the <a href="https://ollama.com/download" target="_blank" rel="noreferrer noopener">official site</a>.</p>



<p>Verify installation:</p>



<pre class="wp-block-code"><code>ollama --version</code></pre>



<p>Run a model; in this demo we will use the <em>qwen2.5:latest</em> model.</p>



<pre class="wp-block-code"><code>ollama run qwen2.5:latest</code></pre>



<h2>Ollama API Basics</h2>



<p>Ollama exposes REST APIs that other applications can use. By default, Ollama runs a local HTTP server at:</p>



<pre class="wp-block-code"><code>http:&#47;&#47;localhost:11434</code></pre>



<p>We can interact with it using standard REST API calls. If we send a GET request to http://localhost:11434 we will get the following response.</p>



<figure class="wp-block-image size-full"><img loading="lazy" width="750" height="370" src="https://nolowiz.com/wp-content/uploads/2025/12/image-4.png" alt="" class="wp-image-6868" srcset="https://nolowiz.com/wp-content/uploads/2025/12/image-4.png 750w, https://nolowiz.com/wp-content/uploads/2025/12/image-4-300x148.png 300w, https://nolowiz.com/wp-content/uploads/2025/12/image-4-150x74.png 150w" sizes="(max-width: 750px) 100vw, 750px" /></figure>



<p>Ollama provides the following APIs:</p>



<ul><li>Text generation API</li><li>Chat completion API</li><li>Embedding generation API</li><li>Version API</li></ul>



<h2>1. Generate Text with Ollama API</h2>



<p>To generate text using the Ollama API, call the endpoint below:</p>



<pre class="wp-block-code"><code>POST http://localhost:11434/api/generate</code></pre>



<p>Example request body:</p>



<pre class="wp-block-code"><code>{
"model": "qwen2.5:latest",
"prompt": "Define REST API in 50 words"
}</code></pre>



<figure class="wp-block-image size-full"><img loading="lazy" width="935" height="732" src="https://nolowiz.com/wp-content/uploads/2025/12/image-10.png" alt="" class="wp-image-6893" srcset="https://nolowiz.com/wp-content/uploads/2025/12/image-10.png 935w, https://nolowiz.com/wp-content/uploads/2025/12/image-10-300x235.png 300w, https://nolowiz.com/wp-content/uploads/2025/12/image-10-768x601.png 768w, https://nolowiz.com/wp-content/uploads/2025/12/image-10-150x117.png 150w" sizes="(max-width: 935px) 100vw, 935px" /></figure>
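<p>Each line of the streamed reply is a standalone JSON object carrying a <code>response</code> fragment and a <code>done</code> flag, so the full text can be stitched back together client-side. A minimal Python sketch (the function name is our own):</p>

```python
import json

def collect_stream(lines):
    """Concatenate 'response' fragments from a streamed /api/generate
    reply, where each line is one JSON object."""
    parts = []
    for line in lines:
        if not line.strip():
            continue  # skip blank lines between objects
        chunk = json.loads(line)
        parts.append(chunk.get("response", ""))
        if chunk.get("done"):
            break  # final object of the stream
    return "".join(parts)
```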



<p>As we can see, the generate API returns a stream of JSON objects, one per line. The response is streamed by default, making it suitable for chat UIs. To disable streaming, set &#8220;stream&#8221;: false in the request body.</p>



<figure class="wp-block-image size-full"><img loading="lazy" width="927" height="595" src="https://nolowiz.com/wp-content/uploads/2025/12/image-11.png" alt="" class="wp-image-6894" srcset="https://nolowiz.com/wp-content/uploads/2025/12/image-11.png 927w, https://nolowiz.com/wp-content/uploads/2025/12/image-11-300x193.png 300w, https://nolowiz.com/wp-content/uploads/2025/12/image-11-768x493.png 768w, https://nolowiz.com/wp-content/uploads/2025/12/image-11-150x96.png 150w" sizes="(max-width: 927px) 100vw, 927px" /></figure>
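<p>The same non-streaming request can be issued from Python with only the standard library. The helper names below are our own, not part of an official Ollama client:</p>

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # default local endpoint

def build_generate_payload(model, prompt, stream=False):
    """Build the JSON body for POST /api/generate."""
    return {"model": model, "prompt": prompt, "stream": stream}

def generate(model, prompt):
    """Send a non-streaming generate request and return the text."""
    data = json.dumps(build_generate_payload(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL + "/api/generate",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```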



<h2>2. Generate a chat completion</h2>



<p>For conversational use cases, we can use the following Ollama API endpoint:</p>



<pre class="wp-block-code"><code>POST /api/chat</code></pre>



<p>Example request body:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: jscript; title: ; notranslate">
{
&quot;model&quot;: &quot;qwen2.5:latest&quot;,
&quot;messages&quot;: &#91;
{&quot;role&quot;: &quot;system&quot;, &quot;content&quot;: &quot;You are a helpful assistant&quot;},
{&quot;role&quot;: &quot;user&quot;, &quot;content&quot;: &quot;What is Docker?&quot;}
],
&quot;stream&quot; : false
}
</pre></div>
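<p>A similar sketch for the chat endpoint, building the message list and pulling the assistant&#8217;s text out of a non-streaming reply (helper names are our own):</p>

```python
import json

def build_chat_payload(model, messages, stream=False):
    """Build the JSON body for POST /api/chat."""
    return {"model": model, "messages": messages, "stream": stream}

def extract_reply(response_body):
    """Return the assistant text from a non-streaming /api/chat reply."""
    return json.loads(response_body)["message"]["content"]
```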


<figure class="wp-block-image size-full"><img loading="lazy" width="927" height="776" src="https://nolowiz.com/wp-content/uploads/2025/12/image-7.png" alt="" class="wp-image-6877" srcset="https://nolowiz.com/wp-content/uploads/2025/12/image-7.png 927w, https://nolowiz.com/wp-content/uploads/2025/12/image-7-300x251.png 300w, https://nolowiz.com/wp-content/uploads/2025/12/image-7-768x643.png 768w, https://nolowiz.com/wp-content/uploads/2025/12/image-7-150x126.png 150w" sizes="(max-width: 927px) 100vw, 927px" /></figure>



<p></p>



<script async src="https://pagead2.googlesyndication.com/pagead/js/adsbygoogle.js?client=ca-pub-2735334721002354"
     crossorigin="anonymous"></script>
<!-- article-horizontal -->
<ins class="adsbygoogle"
     style="display:block"
     data-ad-client="ca-pub-2735334721002354"
     data-ad-slot="8835878737"
     data-ad-format="auto"
     data-full-width-responsive="true"></ins>
<script>
     (adsbygoogle = window.adsbygoogle || []).push({});
</script>



<h2>3. Generate Embedding</h2>



<p>Ollama also supports embeddings for semantic search and RAG systems. To generate an embedding, use the following API endpoint:</p>



<pre class="wp-block-code"><code>POST /api/embeddings</code></pre>



<p>Example request body:</p>



<pre class="wp-block-code"><code>{
"model": "all-minilm",
"prompt": "Nolowiz is awesome"
}</code></pre>



<figure class="wp-block-image size-full"><img loading="lazy" width="917" height="659" src="https://nolowiz.com/wp-content/uploads/2025/12/image-8.png" alt="" class="wp-image-6883" srcset="https://nolowiz.com/wp-content/uploads/2025/12/image-8.png 917w, https://nolowiz.com/wp-content/uploads/2025/12/image-8-300x216.png 300w, https://nolowiz.com/wp-content/uploads/2025/12/image-8-768x552.png 768w, https://nolowiz.com/wp-content/uploads/2025/12/image-8-150x108.png 150w" sizes="(max-width: 917px) 100vw, 917px" /></figure>
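<p>Embedding vectors are typically compared with cosine similarity; in a semantic search, for example, you would embed the query and rank documents by this score:</p>

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```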



<h2>4. Version</h2>



<p>This API endpoint returns the version of Ollama:</p>



<pre class="wp-block-code"><code>GET /api/version</code></pre>



<figure class="wp-block-image size-full"><img loading="lazy" width="916" height="340" src="https://nolowiz.com/wp-content/uploads/2025/12/image-9.png" alt="" class="wp-image-6886" srcset="https://nolowiz.com/wp-content/uploads/2025/12/image-9.png 916w, https://nolowiz.com/wp-content/uploads/2025/12/image-9-300x111.png 300w, https://nolowiz.com/wp-content/uploads/2025/12/image-9-768x285.png 768w, https://nolowiz.com/wp-content/uploads/2025/12/image-9-150x56.png 150w" sizes="(max-width: 916px) 100vw, 916px" /></figure>



<h2>Conclusion</h2>



<p>The Ollama API makes running LLMs locally simple, developer‑friendly, and practical. If you want control over your data, predictable costs, and low‑latency inference, Ollama is one of the best tools available today.</p>
<p>The post <a rel="nofollow" href="https://nolowiz.com/ollama-api-run-large-language-models-locally-with-simple-apis/">Ollama API: Run Large Language Models Locally with Simple APIs</a> appeared first on <a rel="nofollow" href="https://nolowiz.com">NoloWiz</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Essential Ollama Commands: A Complete Guide</title>
		<link>https://nolowiz.com/essential-ollama-commands-a-complete-guide/</link>
		
		<dc:creator><![CDATA[Rupesh Sreeraman]]></dc:creator>
		<pubDate>Wed, 03 Dec 2025 15:29:47 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<guid isPermaLink="false">https://nolowiz.com/?p=6772</guid>

					<description><![CDATA[<p>Ollama has quickly become one of the most convenient ways to run large language models locally.This guide covers the most important and practical Ollama commands that you can use daily. Install a Model The pull command downloads a model from the official Ollama registry. To install a model run the below command : Example : ... <a title="Essential Ollama Commands: A Complete Guide" class="read-more" href="https://nolowiz.com/essential-ollama-commands-a-complete-guide/" aria-label="More on Essential Ollama Commands: A Complete Guide">Read more</a></p>
<p>The post <a rel="nofollow" href="https://nolowiz.com/essential-ollama-commands-a-complete-guide/">Essential Ollama Commands: A Complete Guide</a> appeared first on <a rel="nofollow" href="https://nolowiz.com">NoloWiz</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Ollama has quickly become one of the most convenient ways to run large language models locally. This guide covers the most important and practical <a href="https://ollama.com/" target="_blank" rel="noreferrer noopener">Ollama</a> commands that you can use daily.</p>



<h2>Install a Model</h2>



<p>The <code>pull</code> command downloads a model from the official Ollama registry. To install a model, run the command below:</p>



<pre class="wp-block-code"><code>ollama pull &lt;model-name&gt;</code></pre>



<p>Example:</p>



<pre class="wp-block-code"><code>ollama pull gemma3</code></pre>






<h2>Run a Model (Interactive Mode)</h2>



<p>Starts an interactive prompt where you can chat with the model.</p>



<pre class="wp-block-code"><code>ollama run &lt;model-name&gt;</code></pre>



<p>Example:</p>



<pre class="wp-block-code"><code>ollama run mistral</code></pre>



<h2>Run a Single Prompt (Non-interactive)</h2>



<p>This command is useful for scripting or quick one-shot tasks.</p>



<pre class="wp-block-code"><code>ollama run &lt;model-name&gt; "&lt;your prompt&gt;"</code></pre>



<p>Example:</p>



<pre class="wp-block-code"><code>ollama run llama3 "Write a Python function to reverse a string"</code></pre>



<figure class="wp-block-image size-large"><img loading="lazy" width="1024" height="93" src="https://nolowiz.com/wp-content/uploads/2025/12/image-3-1024x93.png" alt="" class="wp-image-6787" srcset="https://nolowiz.com/wp-content/uploads/2025/12/image-3-1024x93.png 1024w, https://nolowiz.com/wp-content/uploads/2025/12/image-3-300x27.png 300w, https://nolowiz.com/wp-content/uploads/2025/12/image-3-768x69.png 768w, https://nolowiz.com/wp-content/uploads/2025/12/image-3-150x14.png 150w, https://nolowiz.com/wp-content/uploads/2025/12/image-3.png 1040w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>
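<p>Because the one-shot form prints the answer and exits, it is easy to call from scripts. Here is a minimal Python sketch that wraps it with <code>subprocess</code>; it assumes the <code>ollama</code> binary is on your PATH and the model is already pulled, and the model name is just an illustration:</p>

```python
import subprocess

def ollama_cmd(model, prompt):
    # Argument list for a one-shot, non-interactive `ollama run` call.
    return ["ollama", "run", model, prompt]

def run_prompt(model, prompt):
    # Runs the command, raises on a non-zero exit code, and
    # returns the model's reply with trailing whitespace removed.
    result = subprocess.run(
        ollama_cmd(model, prompt), capture_output=True, text=True, check=True
    )
    return result.stdout.strip()
```

<p>For example, <code>run_prompt("llama3", "Write a haiku about Python")</code> returns the model&#8217;s reply as a string that you can post-process or log.</p>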



<h2>Run a Multimodal model</h2>



<p>Some models are multimodal, meaning they accept both text and images; <a href="https://ollama.com/library/gemma3" target="_blank" rel="noreferrer noopener">Gemma 3</a> is one example. To pass an image to a multimodal model, run the following command:</p>



<pre class="wp-block-code"><code>ollama run &lt;model&gt; "Prompt" &lt;image_path&gt;</code></pre>



<p>Example:</p>



<pre class="wp-block-code"><code>ollama run gemma3 "What's in this image" "/Users/stark/Desktop/smile.png"</code></pre>



<h2>Run OCR</h2>



<p>Some models support OCR (optical character recognition), which involves extracting text from images. LLaVA and DeepSeek-OCR are examples of such models.</p>



<pre class="wp-block-code"><code>ollama run &lt;OCR_model> "&lt;Image_Path> \n OCR"
                OR
ollama run &lt;OCR_model> "&lt;Image_Path> \n Extract text from the image"
</code></pre>



<p>Example command to extract text from an image:</p>



<pre class="wp-block-code"><code>ollama run deepseek-ocr "C:\Users\Rupesh\Downloads\written_text.png\n OCR"</code></pre>



<p></p>



<h2>List Installed Models</h2>



<p>Shows all local models stored on your system along with their sizes.</p>



<pre class="wp-block-code"><code>ollama list</code></pre>



<p>Or you can run its shorter alias:</p>



<pre class="wp-block-code"><code>ollama ls</code></pre>



<h2>Show Model Details</h2>



<p>Displays metadata like parameters, quantization type, license, etc.</p>



<pre class="wp-block-code"><code>ollama show &lt;model-name&gt;</code></pre>



<figure class="wp-block-image size-full"><img loading="lazy" width="487" height="421" src="https://nolowiz.com/wp-content/uploads/2025/12/image-1.png" alt="" class="wp-image-6779" srcset="https://nolowiz.com/wp-content/uploads/2025/12/image-1.png 487w, https://nolowiz.com/wp-content/uploads/2025/12/image-1-300x259.png 300w, https://nolowiz.com/wp-content/uploads/2025/12/image-1-150x130.png 150w" sizes="(max-width: 487px) 100vw, 487px" /></figure>



<h2>Remove a Model</h2>



<p>Deletes the model from your local storage.</p>



<pre class="wp-block-code"><code>ollama rm &lt;model-name&gt;</code></pre>



<p>Example:</p>



<figure class="wp-block-image size-full"><img loading="lazy" width="371" height="47" src="https://nolowiz.com/wp-content/uploads/2025/12/image-2.png" alt="" class="wp-image-6781" srcset="https://nolowiz.com/wp-content/uploads/2025/12/image-2.png 371w, https://nolowiz.com/wp-content/uploads/2025/12/image-2-300x38.png 300w, https://nolowiz.com/wp-content/uploads/2025/12/image-2-150x19.png 150w" sizes="(max-width: 371px) 100vw, 371px" /></figure>



<h2>Start the Ollama Server</h2>



<p>This runs the background server that handles API calls. Useful when integrating Ollama with:</p>



<ul><li>Python scripts</li><li>Node.js apps</li><li>REST API clients</li></ul>



<pre class="wp-block-code"><code>ollama serve</code></pre>
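<p>Once the server is running, it listens on port 11434 by default and exposes a REST API. As a rough sketch (assuming the default address and an already-pulled model), a Python script can call the <code>/api/generate</code> endpoint like this:</p>

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama address

def build_payload(model, prompt):
    # stream=False asks the server for a single JSON reply
    # instead of a stream of partial chunks.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model, prompt):
    # Sends the prompt to a locally running `ollama serve` instance
    # and returns the generated text from the JSON response.
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(model, prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

<p>A call such as <code>generate("gemma3", "Why is the sky blue?")</code> returns the reply as a plain string, which is all most scripts and apps need.</p>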



<h2>Check Running Models</h2>



<p>Lists currently running model sessions.</p>



<pre class="wp-block-code"><code>ollama ps</code></pre>



<h2>Stop Running Models</h2>



<p>Stops a specific running model.</p>



<pre class="wp-block-code"><code>ollama stop &lt;model-name&gt;</code></pre>






<h2>Create a Custom Model</h2>



<p>A common use case for creating a custom model in Ollama is to build a specialized version of an existing base model with a fixed system prompt, domain-specific knowledge, or predefined behavior. For example, developers often create custom models to act as coding assistants, customer-support bots, summarizers, or blog-writing agents by embedding their instructions, rules, or context directly into the <code>Modelfile</code>. This removes the need to repeat instructions in every prompt and ensures consistent, stable responses tailored to a specific workflow or application.</p>



<pre class="wp-block-code"><code>ollama create &lt;new-model-name&gt; -f &lt;Modelfile path&gt;</code></pre>



<p>Here is a simple example:</p>



<p>First, create a Modelfile with the following content and save it as <code>cat_model</code>:</p>



<pre class="wp-block-code"><code>FROM gemma3:4b
SYSTEM """You are a happy cat."""</code></pre>



<p>Next, we can create the custom model by running the command below:</p>



<pre class="wp-block-code"><code>ollama create catmodel -f "C:\Users\Rupesh\cat_model"</code></pre>



<p>To summarize the essential Ollama commands:</p>



<h3>1. Model execution commands</h3>



<ul><li>ollama run &lt;model&gt;</li><li>ollama run &lt;model&gt; &#8220;prompt&#8221;</li></ul>



<h3>2. Model Information &amp; Creation</h3>



<ul><li>ollama pull &lt;model&gt;</li><li>ollama show &lt;model&gt;</li><li>ollama create my-model -f modelFile</li></ul>



<h3>3. Model Management</h3>



<ul><li>ollama list</li><li>ollama rm &lt;model&gt;</li></ul>



<h3>4. Server and Process</h3>



<ul><li>ollama serve</li><li>ollama ps</li><li>ollama stop &lt;model&gt;</li></ul>



<figure class="wp-block-image size-large"><img loading="lazy" width="1024" height="540" src="https://nolowiz.com/wp-content/uploads/2025/12/ollama-essential-commands-nolowiz-infographics-1024x540.jpg" alt="Ollama Essential Commands Infographics" class="wp-image-6821" srcset="https://nolowiz.com/wp-content/uploads/2025/12/ollama-essential-commands-nolowiz-infographics-1024x540.jpg 1024w, https://nolowiz.com/wp-content/uploads/2025/12/ollama-essential-commands-nolowiz-infographics-300x158.jpg 300w, https://nolowiz.com/wp-content/uploads/2025/12/ollama-essential-commands-nolowiz-infographics-768x405.jpg 768w, https://nolowiz.com/wp-content/uploads/2025/12/ollama-essential-commands-nolowiz-infographics-1536x810.jpg 1536w, https://nolowiz.com/wp-content/uploads/2025/12/ollama-essential-commands-nolowiz-infographics-2048x1081.jpg 2048w, https://nolowiz.com/wp-content/uploads/2025/12/ollama-essential-commands-nolowiz-infographics-150x79.jpg 150w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<h2>Conclusion</h2>



<p>Mastering these Ollama commands unlocks full control over locally running LLMs. Whether you&#8217;re building AI apps, doing offline experimentation, or running custom models, these commands will greatly enhance your productivity. Read our tutorial on <a href="https://nolowiz.com/lm-studio-image-generation-using-fastsd-mcp-server/" target="_blank" rel="noreferrer noopener">LM Studio Image Generation using FastSD MCP Server</a>.</p>
<p>The post <a rel="nofollow" href="https://nolowiz.com/essential-ollama-commands-a-complete-guide/">Essential Ollama Commands: A Complete Guide</a> appeared first on <a rel="nofollow" href="https://nolowiz.com">NoloWiz</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Google’s New AI Breakthrough: Nested Learning Could Change Everything</title>
		<link>https://nolowiz.com/googles-new-ai-breakthrough-nested-learning-could-change-everything/</link>
		
		<dc:creator><![CDATA[Rupesh Sreeraman]]></dc:creator>
		<pubDate>Mon, 24 Nov 2025 05:57:14 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<guid isPermaLink="false">https://nolowiz.com/?p=6713</guid>

					<description><![CDATA[<p>Imagine teaching a computer not just once, but continuously like how humans learn throughout their life. That’s the big idea behind Nested Learning, a fresh approach from Google Research that could help AI models learn new things without forgetting what they already know. The Problem: Why AI Forgets Traditional AI models struggle with continual learning ... <a title="Google’s New AI Breakthrough: Nested Learning Could Change Everything" class="read-more" href="https://nolowiz.com/googles-new-ai-breakthrough-nested-learning-could-change-everything/" aria-label="More on Google’s New AI Breakthrough: Nested Learning Could Change Everything">Read more</a></p>
<p>The post <a rel="nofollow" href="https://nolowiz.com/googles-new-ai-breakthrough-nested-learning-could-change-everything/">Google’s New AI Breakthrough: Nested Learning Could Change Everything</a> appeared first on <a rel="nofollow" href="https://nolowiz.com">NoloWiz</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Imagine teaching a computer not just once, but continuously, much like how humans learn throughout their lives. That’s the big idea behind Nested Learning, a fresh approach from Google Research that could help AI models learn new things <em>without forgetting</em> what they already know.</p>



<h2>The Problem: Why AI Forgets</h2>



<p>Traditional AI models struggle with continual learning &#8211; when you feed them new tasks, they tend to forget earlier ones. This happens because they constantly update their core parameters, overwriting what was learned before. Human brains don’t work that way: thanks to neuroplasticity, different parts of the brain adapt and update at different rates, which helps us keep old memories even as we learn new things. In contrast, most machine-learning systems treat the architecture (the “shape” of the network) separately from the optimization algorithm (how it learns). Nested Learning challenges that separation.</p>






<h2>What Nested Learning Actually Means</h2>



<p>Nested Learning reframes a single AI model as a network of <em>smaller, connected learning units</em>, each with its own role.</p>



<ol><li>Nested Optimization Problems  &#8211; Rather than one monolithic “learn everything” loop, the model is seen as multiple optimization problems running together or layered on top of each other.</li><li>Context Flow &amp; Update Rates <ol><li>Each of these nested problems has its own “context flow”: the information it uses to learn.</li><li>They also update on different time scales &#8211; some parts of the model change quickly, others more slowly. This mimics how brains consolidate memories: fast learning + slow, steady updates.</li></ol></li><li>Unified View of Architecture + Optimization &#8211; Instead of thinking of architecture (how big or deep a network is) and optimization (how we train it) as separate, Nested Learning sees them as different levels of the same learning system.</li></ol>
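<p>The multi-time-scale idea can be illustrated with a toy sketch (my own simplification, not the paper&#8217;s actual algorithm): three &#8220;memory levels&#8221; sharing one training loop, each updating at its own frequency.</p>

```python
def count_updates(steps, update_every):
    # update_every maps each memory level to its update period;
    # slower levels skip most steps, like slow memory consolidation.
    updates = {name: 0 for name in update_every}
    for step in range(1, steps + 1):
        for name, period in update_every.items():
            if step % period == 0:
                updates[name] += 1
    return updates

# A fast level updates every step, a medium one every 10 steps,
# and a slow one only every 100 steps.
counts = count_updates(1000, {"fast": 1, "medium": 10, "slow": 100})
```

<p>After 1,000 steps the fast level has made 1,000 updates, the medium level 100, and the slow level only 10, so the slow level changes gently and retains older information while the fast level tracks the newest context.</p>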



<h2>Key Innovations</h2>



<p>Google’s team used several clever techniques to put Nested Learning into action:</p>



<ul><li><strong>Deep Optimizers </strong>  <ul><li>Traditional optimizers (like SGD or Adam) are reinterpreted as memory modules &#8211; they don’t just compute gradients, but store and recall information. By changing how these optimizers work (for example, using an L2 loss instead of just simple dot products), they become more robust, especially when dealing with noisy or surprising data.</li></ul></li></ul>



<ul><li><strong>Continuum Memory Systems (CMS) </strong><ul><li>In standard models, you often have “short-term memory” (recent context) and “long-term memory” (what the model was pretrained on). Nested Learning expands this into a <em>spectrum</em> of memory modules. Each module in this continuum learns at its own pace (its own update frequency), allowing richer retention of information over different time scales.</li></ul></li></ul>



<figure class="wp-block-image size-full"><img loading="lazy" width="812" height="398" src="https://nolowiz.com/wp-content/uploads/2025/11/image-11.png" alt="" class="wp-image-6732" srcset="https://nolowiz.com/wp-content/uploads/2025/11/image-11.png 812w, https://nolowiz.com/wp-content/uploads/2025/11/image-11-300x147.png 300w, https://nolowiz.com/wp-content/uploads/2025/11/image-11-768x376.png 768w, https://nolowiz.com/wp-content/uploads/2025/11/image-11-150x74.png 150w" sizes="(max-width: 812px) 100vw, 812px" /><figcaption>The uniform, reusable structure and the multi-time-scale updates in the brain are the key components that unlock continual learning in humans</figcaption></figure>



<h2>HOPE Architecture</h2>



<p></p>



<p>To prove Nested Learning actually works, Google built a prototype architecture called HOPE (Hierarchy + Experts + Smart Routing):</p>



<ul><li>It’s a <em>self-modifying</em> model: parts of it can change how they learn over time.</li><li>It uses continuum memory system (CMS) blocks to handle very large contexts (i.e., long sequences of information).</li><li>HOPE can optimize not just its weights but also its own update rules, in a loop, kind of like the model “learning how to learn.”</li></ul>



<p>Test results for this architecture are promising:</p>



<ul><li>It showed better performance on language modeling tasks.</li><li>It handled long-context reasoning much better than existing models.</li><li>In continual learning scenarios, it retained old knowledge more effectively.</li></ul>



<p>The full results are available in the <a href="https://abehrouz.github.io/files/NL.pdf" target="_blank" rel="noreferrer noopener">research paper</a>.</p>



<figure class="wp-block-image size-large"><img loading="lazy" width="1024" height="572" src="https://nolowiz.com/wp-content/uploads/2025/11/nested-learning-hope-infographics-nolowiz-1024x572.jpg" alt="Nested learning, Hope architecture infographics" class="wp-image-6715" srcset="https://nolowiz.com/wp-content/uploads/2025/11/nested-learning-hope-infographics-nolowiz-1024x572.jpg 1024w, https://nolowiz.com/wp-content/uploads/2025/11/nested-learning-hope-infographics-nolowiz-300x167.jpg 300w, https://nolowiz.com/wp-content/uploads/2025/11/nested-learning-hope-infographics-nolowiz-768x429.jpg 768w, https://nolowiz.com/wp-content/uploads/2025/11/nested-learning-hope-infographics-nolowiz-1536x857.jpg 1536w, https://nolowiz.com/wp-content/uploads/2025/11/nested-learning-hope-infographics-nolowiz-2048x1143.jpg 2048w, https://nolowiz.com/wp-content/uploads/2025/11/nested-learning-hope-infographics-nolowiz-150x84.jpg 150w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<h2>What This Means for the Future</h2>



<ul><li><strong>Lifelong Learning:</strong> Nested Learning is a step toward building AI that truly <em>grows</em> over time, instead of being retrained from scratch.</li><li><strong>Flexible Design:</strong> Because the paradigm adds “levels” of learning (with different update speeds), future researchers can experiment with architectures that were hard to imagine before.</li><li><strong>Closer to Human Learning:</strong> The idea mimics how biological brains work (multi-scale memory, different learning rates), which could be a foundation for more adaptive and efficient AI systems.</li></ul>






<h2>Conclusion</h2>



<p>Nested Learning isn’t just a tweak or a new algorithm — it&#8217;s a paradigm shift in how we think about training and structuring AI models. Instead of a flat architecture that learns all at once, it treats a model like a layered system, where different parts have their own learning rhythm. That makes it possible for AI to <em>keep learning</em> without discarding its past. Read about <a href="https://nolowiz.com/lm-studio-image-generation-using-fastsd-mcp-server/" target="_blank" rel="noreferrer noopener">LM Studio Image Generation using FastSD MCP Server</a>.</p>
<p>The post <a rel="nofollow" href="https://nolowiz.com/googles-new-ai-breakthrough-nested-learning-could-change-everything/">Google’s New AI Breakthrough: Nested Learning Could Change Everything</a> appeared first on <a rel="nofollow" href="https://nolowiz.com">NoloWiz</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>LM Studio Image Generation using FastSD MCP Server</title>
		<link>https://nolowiz.com/lm-studio-image-generation-using-fastsd-mcp-server/</link>
		
		<dc:creator><![CDATA[Rupesh Sreeraman]]></dc:creator>
		<pubDate>Sun, 27 Jul 2025 11:51:42 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<guid isPermaLink="false">https://nolowiz.com/?p=6640</guid>

					<description><![CDATA[<p>In this tutorial, we will discuss how to use the FastSD MCP server to generate an image using LM Studio. Download and install FastSD FastSDCPU&#160;is an open-source tool that enables fast text-to-image generation with Stable Diffusion models on CPUs and Intel AI PCs, leveraging Latent Consistency Models and Adversarial Diffusion Distillation for speed. It supports ... <a title="LM Studio Image Generation using FastSD MCP Server" class="read-more" href="https://nolowiz.com/lm-studio-image-generation-using-fastsd-mcp-server/" aria-label="More on LM Studio Image Generation using FastSD MCP Server">Read more</a></p>
<p>The post <a rel="nofollow" href="https://nolowiz.com/lm-studio-image-generation-using-fastsd-mcp-server/">LM Studio Image Generation using FastSD MCP Server</a> appeared first on <a rel="nofollow" href="https://nolowiz.com">NoloWiz</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>In this tutorial, we will discuss how to use the <a href="https://github.com/rupeshs/fastsdcpu" target="_blank" rel="noreferrer noopener">FastSD MCP server</a> to generate an image using LM Studio.</p>



<h2>Download and Install FastSD</h2>



<p><strong>FastSDCPU</strong>&nbsp;is an open-source tool that enables fast text-to-image generation with Stable Diffusion models on CPUs and Intel AI PCs, leveraging Latent Consistency Models and Adversarial Diffusion Distillation for speed. It supports desktop GUI, web UI, CLI, and can achieve sub-second image generation with Intel OpenVINO. </p>



<p>For this demo, we are using Windows 11. Follow the steps below to install FastSD:</p>



<ul><li>Install <a href="https://www.python.org/">Python 3</a> and <a href="https://docs.astral.sh/uv/#highlights" target="_blank" rel="noreferrer noopener">uv</a>, a fast package manager for Python.</li><li>Download or clone the <a href="https://github.com/rupeshs/fastsdcpu" target="_blank" rel="noreferrer noopener">FastSDCPU repository</a> from GitHub.</li><li>Double-click <code>install.bat</code> (installation takes some time, depending on your internet speed).</li><li>After the installation completes, close the command prompt window.</li></ul>



<p>Next, start the FastSD MCP server by running the <em>start-mcpserver.bat</em> file:</p>






<pre class="wp-block-code"><code>start-mcpserver.bat</code></pre>



<figure class="wp-block-image size-large"><img loading="lazy" width="1024" height="421" src="https://nolowiz.com/wp-content/uploads/2025/07/image-21-1024x421.png" alt="FastSD MCP server" class="wp-image-6645" srcset="https://nolowiz.com/wp-content/uploads/2025/07/image-21-1024x421.png 1024w, https://nolowiz.com/wp-content/uploads/2025/07/image-21-300x123.png 300w, https://nolowiz.com/wp-content/uploads/2025/07/image-21-768x316.png 768w, https://nolowiz.com/wp-content/uploads/2025/07/image-21-150x62.png 150w, https://nolowiz.com/wp-content/uploads/2025/07/image-21.png 1086w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<h2>Download and Install LM Studio</h2>



<p><strong>LM Studio</strong>&nbsp;is a free, cross-platform desktop app that lets you download, run, and experiment with open-source large language models (LLMs) like LLaMA, Mistral, or Qwen entirely on your computer &#8211; offline, with no cloud or usage fees &#8211; providing a user-friendly interface, privacy, and developer APIs. Download and install LM Studio.</p>



<p><a href="https://lmstudio.ai/" target="_blank" rel="noreferrer noopener">Download LM Studio from the official website</a></p>



<p>Next, download a large language model. For this demo, I&#8217;m using the <strong>Qwen2.5-7B Instruct</strong> model. Then we need to configure the MCP server in LM Studio: click the integrations button in the prompt box, click install, and then edit <code>mcp.json</code> as shown below.</p>



<figure class="wp-block-image size-full"><img loading="lazy" width="470" height="368" src="https://nolowiz.com/wp-content/uploads/2025/07/lmstudio-mcp-configuration.jpg" alt="LM Studio MCP integrations" class="wp-image-6641" srcset="https://nolowiz.com/wp-content/uploads/2025/07/lmstudio-mcp-configuration.jpg 470w, https://nolowiz.com/wp-content/uploads/2025/07/lmstudio-mcp-configuration-300x235.jpg 300w, https://nolowiz.com/wp-content/uploads/2025/07/lmstudio-mcp-configuration-150x117.jpg 150w" sizes="(max-width: 470px) 100vw, 470px" /></figure>



<p>Then paste the content below into <code>mcp.json</code>:</p>



<pre class="wp-block-code"><code>{
  "mcpServers": {
    "fastsd": {
      "command": "npx",
      "args": &#91;
        "mcp-remote",
        "http://127.0.0.1:8000/mcp"
      ]
    }
  }
}</code></pre>



<p>Save and restart LM Studio, and we will see the FastSD MCP integration as shown below.</p>



<figure class="wp-block-image size-full"><img loading="lazy" width="850" height="177" src="https://nolowiz.com/wp-content/uploads/2025/07/fastsd_mcp_lmstudio_integration.png" alt="LM Studio FastSD MCP" class="wp-image-6646" srcset="https://nolowiz.com/wp-content/uploads/2025/07/fastsd_mcp_lmstudio_integration.png 850w, https://nolowiz.com/wp-content/uploads/2025/07/fastsd_mcp_lmstudio_integration-300x62.png 300w, https://nolowiz.com/wp-content/uploads/2025/07/fastsd_mcp_lmstudio_integration-768x160.png 768w, https://nolowiz.com/wp-content/uploads/2025/07/fastsd_mcp_lmstudio_integration-150x31.png 150w" sizes="(max-width: 850px) 100vw, 850px" /></figure>






<p>Now we can generate images using the FastSD MCP server and LM Studio.</p>



<figure class="wp-block-image size-full"><img loading="lazy" width="779" height="841" src="https://nolowiz.com/wp-content/uploads/2025/07/lmstudio.png" alt="LM studio FastSD image generation " class="wp-image-6649" srcset="https://nolowiz.com/wp-content/uploads/2025/07/lmstudio.png 779w, https://nolowiz.com/wp-content/uploads/2025/07/lmstudio-278x300.png 278w, https://nolowiz.com/wp-content/uploads/2025/07/lmstudio-768x829.png 768w, https://nolowiz.com/wp-content/uploads/2025/07/lmstudio-150x162.png 150w" sizes="(max-width: 779px) 100vw, 779px" /></figure>



<p>Click on the external image and open the image URL in a browser.</p>



<figure class="wp-block-image size-full"><img loading="lazy" width="614" height="723" src="https://nolowiz.com/wp-content/uploads/2025/07/fastsd-mcp-image.png" alt="LM Studio FastSD generated image" class="wp-image-6651" srcset="https://nolowiz.com/wp-content/uploads/2025/07/fastsd-mcp-image.png 614w, https://nolowiz.com/wp-content/uploads/2025/07/fastsd-mcp-image-255x300.png 255w, https://nolowiz.com/wp-content/uploads/2025/07/fastsd-mcp-image-150x177.png 150w" sizes="(max-width: 614px) 100vw, 614px" /></figure>



<p></p>



<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe loading="lazy" title="LM Studio Image Generation using FastSD MCP  #lmstudio #fastsdcpu #genai #ai #tutorial" width="900" height="506" src="https://www.youtube.com/embed/XtoHP-lxWO4?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
</div></figure>






<h2>Conclusion</h2>



<p>In conclusion, integrating FastSD with LM Studio is straightforward, and the FastSD MCP server enables LM Studio to generate images.</p>
<p>The post <a rel="nofollow" href="https://nolowiz.com/lm-studio-image-generation-using-fastsd-mcp-server/">LM Studio Image Generation using FastSD MCP Server</a> appeared first on <a rel="nofollow" href="https://nolowiz.com">NoloWiz</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Claude Desktop Image Generation using FastSD MCP Server on NPU &#8211; OpenVINO</title>
		<link>https://nolowiz.com/cladue-desktop-image-generation-using-fastsd-mcp-server-on-npu-openvino/</link>
		
		<dc:creator><![CDATA[Rupesh Sreeraman]]></dc:creator>
		<pubDate>Sat, 14 Jun 2025 10:21:46 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<guid isPermaLink="false">https://nolowiz.com/?p=6473</guid>

					<description><![CDATA[<p>In this article, we discuss how to use the FastSD MCP server with the Claude desktop to access the NPU to generate images. Model Context Protocol(MCP) Let&#8217;s first understand the Model Context Protocol (MCP). Anthropic developed the Model Context Protocol (MCP). Anthropic officially announced and open-sourced the protocol in November 2024. The MCP protocol is ... <a title="Claude Desktop Image Generation using FastSD MCP Server on NPU &#8211; OpenVINO" class="read-more" href="https://nolowiz.com/cladue-desktop-image-generation-using-fastsd-mcp-server-on-npu-openvino/" aria-label="More on Claude Desktop Image Generation using FastSD MCP Server on NPU &#8211; OpenVINO">Read more</a></p>
<p>The post <a rel="nofollow" href="https://nolowiz.com/cladue-desktop-image-generation-using-fastsd-mcp-server-on-npu-openvino/">Claude Desktop Image Generation using FastSD MCP Server on NPU &#8211; OpenVINO</a> appeared first on <a rel="nofollow" href="https://nolowiz.com">NoloWiz</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>In this article, we discuss how to use the FastSD MCP server with Claude Desktop to generate images on the NPU.</p>



<h2>Model Context Protocol(MCP)</h2>



<p>Let&#8217;s first understand the Model Context Protocol (MCP). <a href="https://www.anthropic.com/">Anthropic</a> developed the protocol, and officially announced and open-sourced it in <a href="https://www.anthropic.com/news/model-context-protocol" target="_blank" rel="noreferrer noopener">November 2024</a>.</p>



<p>MCP is designed to standardize how large language models (LLMs) and other AI systems interact with external tools, data sources, and software environments, providing a universal interface for context exchange and integration. In other words, think of MCP as a USB-C port for artificial intelligence (AI). Just as USB-C provides a common way to plug in different devices and accessories, MCP provides a common way for AI models to connect with different data sources and tools. You can read more about the MCP specification <a href="https://modelcontextprotocol.io/specification/2025-03-26" target="_blank" rel="noreferrer noopener">here</a>.</p>
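<p>Under the hood, MCP messages are JSON-RPC 2.0 objects. As a rough illustration (the tool name <code>generate_image</code> and its arguments here are hypothetical, not taken from any particular server), a tool invocation can be sketched like this:</p>

```python
import json

# Sketch of an MCP tool-call request. MCP messages are JSON-RPC 2.0;
# the tool name and arguments below are hypothetical, for illustration only.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "generate_image",
        "arguments": {"prompt": "a beautiful landscape"},
    },
}
payload = json.dumps(request)
print(payload)
```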






<p>MCP follows a client-server architecture with the following main components:</p>



<p><strong>MCP Host</strong> &#8211; It is typically the application that users interact with directly, for example, Claude Desktop, Cursor, VS Code, and custom agents.</p>



<p><strong>MCP Server</strong> &#8211; The MCP server is a standalone program or service that exposes specific tools, resources, and capabilities to MCP clients. (For example, FastSD MCP server)</p>



<p><strong>MCP Client</strong> &#8211; The MCP client is a component that resides within the host application and is responsible for managing connections to MCP servers; a host can connect to multiple MCP servers.</p>



<p>MCP supports three transport mechanisms for communication between servers and clients:</p>



<ol><li><strong>stdio</strong> &#8211; Used for local MCP connections (CLI apps)</li><li><strong>Server-Sent Events (SSE)</strong> &#8211;  Used for remote connections. </li><li><strong>Streamable HTTP</strong> &#8211; A newer transport method introduced in 2025, using a single HTTP endpoint for bidirectional messaging.</li></ol>



<p>FastSD utilizes <a href="https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events" target="_blank" rel="noreferrer noopener">Server-Sent Events (SSE) </a>as a transport mechanism for MCP client-server communication.</p>
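<p>To get a feel for the SSE wire format, here is a minimal sketch of a parser that extracts the <code>data:</code> payload of each event from a raw stream. This is illustrative only; a real MCP client should rely on a proper SSE or MCP library.</p>

```python
def parse_sse(stream_text):
    """Collect the data payload of each Server-Sent Event in the text.

    Minimal sketch: ignores event names, ids, and retry fields.
    """
    events, data_lines = [], []
    for line in stream_text.splitlines():
        if line.startswith("data:"):
            data_lines.append(line[len("data:"):].strip())
        elif line == "" and data_lines:
            # A blank line terminates the current event.
            events.append("\n".join(data_lines))
            data_lines = []
    return events

sample = 'event: message\ndata: {"jsonrpc": "2.0"}\n\n'
print(parse_sse(sample))  # ['{"jsonrpc": "2.0"}']
```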



<h2>FastSD MCP server</h2>



<p>FastSD is an optimized version of Stable Diffusion designed specifically for fast image generation, offering significantly better performance than standard Stable Diffusion implementations. It uses <a href="https://github.com/openvinotoolkit/openvino" target="_blank" rel="noreferrer noopener">OpenVINO</a> (an open-source toolkit by Intel for optimizing and deploying deep learning models; it accelerates AI inference on Intel hardware and supports frameworks such as TensorFlow, PyTorch, and ONNX) to speed up inference on the CPU, GPU, and NPU (Neural Processing Unit). To learn more about FastSD and OpenVINO, <a href="https://nolowiz.com/fast-stable-diffusion-on-cpu-using-fastsd-cpu-and-openvino/" target="_blank" rel="noreferrer noopener">read this article</a>.</p>






<p><a href="https://github.com/rupeshs/fastsdcpu" target="_blank" rel="noreferrer noopener">FastSD</a> added support for the MCP server starting from version v1.0.0-beta.200. It works with any MCP host; here we will use Claude Desktop. For this demo, I&#8217;m running everything on an <a href="https://nolowiz.com/ai-pc-and-openvino-quick-and-simple-guide/" target="_blank" rel="noreferrer noopener">Intel AI PC</a> powered by an <a href="https://www.intel.com/content/www/us/en/products/details/processors/core-ultra.html" target="_blank" rel="noreferrer noopener">Intel Core Ultra Processor</a>, which integrates a CPU, GPU, and NPU. I received this device as part of the Intel Edge Innovator program. Download and install FastSD, then set the &#8220;DEVICE&#8221; environment variable so that FastSD uses the NPU.</p>



<pre class="wp-block-code"><code>set DEVICE=NPU</code></pre>



<p>Then run <em>start-webui.bat</em>, apply the following settings, and generate an image using the NPU.</p>



<ul><li>mode &#8211; <strong>LCM-OpenVINO</strong></li><li>OpenVINO model &#8211; <strong>rupeshs/sd15-lcm-square-openvino-int8</strong></li><li>Generation settings -&gt; <strong>number of inference steps &#8211; 3 or 4</strong></li></ul>






<p>Stop/close the FastSD web UI. Next, run the <em>start-mcpserver.bat</em> file; you will see output similar to the screenshot below.</p>



<pre class="wp-block-code"><code>start-mcpserver.bat</code></pre>



<figure class="wp-block-image size-large"><img loading="lazy" width="1024" height="446" src="https://nolowiz.com/wp-content/uploads/2025/06/fastsd-mcp-server-1024x446.png" alt="FastSD MCP server" class="wp-image-6485" srcset="https://nolowiz.com/wp-content/uploads/2025/06/fastsd-mcp-server-1024x446.png 1024w, https://nolowiz.com/wp-content/uploads/2025/06/fastsd-mcp-server-300x131.png 300w, https://nolowiz.com/wp-content/uploads/2025/06/fastsd-mcp-server-768x334.png 768w, https://nolowiz.com/wp-content/uploads/2025/06/fastsd-mcp-server-150x65.png 150w, https://nolowiz.com/wp-content/uploads/2025/06/fastsd-mcp-server.png 1094w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<p>We can see that the FastSD MCP server is running at <em>http://127.0.0.1:8000</em>.</p>
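<p>Before wiring up a client, it can help to confirm that something is actually listening on that address. A small sketch (the host and port match the address shown above):</p>

```python
import socket

def mcp_server_reachable(host="127.0.0.1", port=8000, timeout=2.0):
    """Return True if a TCP connection to the given host/port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(mcp_server_reachable())
```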



<h2>Claude desktop</h2>



<p>Claude Desktop is Anthropic’s AI application for Windows and Mac that lets users interact with the Claude chatbot locally, enabling file uploads, document analysis, and chat history syncing across devices. Download and install the Claude desktop from the <a href="https://claude.ai/download" target="_blank" rel="noreferrer noopener">official website</a>.</p>



<p>Start Claude Desktop and log in to your account. We need to configure Claude Desktop to access the FastSD MCP server to generate images.</p>



<ul><li>Open File -&gt; Settings -&gt; Developer -&gt; Edit config</li><li>Add the config below (also ensure that <a href="https://nodejs.org/en" target="_blank" rel="noreferrer noopener">Node.js</a> is installed on your machine)</li></ul>



<pre class="wp-block-code"><code>{
  "mcpServers": {
    "fastsd": {
      "command": "npx",
      "args": &#91;
        "mcp-remote",
        "http://127.0.0.1:8000/mcp"
      ]
    }
  }
}</code></pre>
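<p>A malformed config file is a common reason a server fails to appear in Claude Desktop. The snippet above can be sanity-checked with a few lines of Python before saving:</p>

```python
import json

# The same config as above; json.loads raises an error if it is malformed.
config_text = '''
{
  "mcpServers": {
    "fastsd": {
      "command": "npx",
      "args": ["mcp-remote", "http://127.0.0.1:8000/mcp"]
    }
  }
}
'''
config = json.loads(config_text)
print(sorted(config["mcpServers"]))  # ['fastsd']
```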



<p>Restart Claude Desktop (View -&gt; Reload).</p>



<p>FastSD provides a system info tool (get_system_info) and an image generation tool. Let&#8217;s try the system info tool, which returns information about the system. Claude will ask for confirmation; click Allow.</p>



<figure class="wp-block-image size-full"><img loading="lazy" width="872" height="533" src="https://nolowiz.com/wp-content/uploads/2025/06/fastsd-claude-allow.png" alt="Claude desktop external integration" class="wp-image-6499" srcset="https://nolowiz.com/wp-content/uploads/2025/06/fastsd-claude-allow.png 872w, https://nolowiz.com/wp-content/uploads/2025/06/fastsd-claude-allow-300x183.png 300w, https://nolowiz.com/wp-content/uploads/2025/06/fastsd-claude-allow-768x469.png 768w, https://nolowiz.com/wp-content/uploads/2025/06/fastsd-claude-allow-150x92.png 150w" sizes="(max-width: 872px) 100vw, 872px" /></figure>



<p>Claude desktop will display system info as shown below.</p>



<figure class="wp-block-image size-full"><img loading="lazy" width="827" height="571" src="https://nolowiz.com/wp-content/uploads/2025/06/fastsd-claude-desktop-system-info.jpg" alt="Claude desktop fastsd system information" class="wp-image-6512" srcset="https://nolowiz.com/wp-content/uploads/2025/06/fastsd-claude-desktop-system-info.jpg 827w, https://nolowiz.com/wp-content/uploads/2025/06/fastsd-claude-desktop-system-info-300x207.jpg 300w, https://nolowiz.com/wp-content/uploads/2025/06/fastsd-claude-desktop-system-info-768x530.jpg 768w, https://nolowiz.com/wp-content/uploads/2025/06/fastsd-claude-desktop-system-info-150x104.jpg 150w" sizes="(max-width: 827px) 100vw, 827px" /></figure>



<p>Next, we will generate an image using the following prompt. I&#8217;ve included &#8220;share the image url&#8221; in the prompt so that the output URL is displayed.</p>



<pre class="wp-block-code"><code>Create photo of a beautiful landscape,share the image url</code></pre>



<figure class="wp-block-image size-full"><img loading="lazy" width="996" height="892" src="https://nolowiz.com/wp-content/uploads/2025/06/calude-dektop-fastsd-image-generation.png" alt="Claude desktop text to image generation using NPU" class="wp-image-6505" srcset="https://nolowiz.com/wp-content/uploads/2025/06/calude-dektop-fastsd-image-generation.png 996w, https://nolowiz.com/wp-content/uploads/2025/06/calude-dektop-fastsd-image-generation-300x269.png 300w, https://nolowiz.com/wp-content/uploads/2025/06/calude-dektop-fastsd-image-generation-768x688.png 768w, https://nolowiz.com/wp-content/uploads/2025/06/calude-dektop-fastsd-image-generation-150x134.png 150w" sizes="(max-width: 996px) 100vw, 996px" /></figure>



<p>We can open the URL in the browser by clicking it.</p>



<figure class="wp-block-image size-full"><img loading="lazy" width="932" height="807" src="https://nolowiz.com/wp-content/uploads/2025/06/fastsd-outputimage.jpg" alt="" class="wp-image-6508" srcset="https://nolowiz.com/wp-content/uploads/2025/06/fastsd-outputimage.jpg 932w, https://nolowiz.com/wp-content/uploads/2025/06/fastsd-outputimage-300x260.jpg 300w, https://nolowiz.com/wp-content/uploads/2025/06/fastsd-outputimage-768x665.jpg 768w, https://nolowiz.com/wp-content/uploads/2025/06/fastsd-outputimage-150x130.jpg 150w" sizes="(max-width: 932px) 100vw, 932px" /></figure>



<p>We can verify NPU usage in the Task Manager.</p>



<figure class="wp-block-image size-full"><img loading="lazy" width="843" height="655" src="https://nolowiz.com/wp-content/uploads/2025/06/npu-usage-fastsd.jpg" alt="" class="wp-image-6510" srcset="https://nolowiz.com/wp-content/uploads/2025/06/npu-usage-fastsd.jpg 843w, https://nolowiz.com/wp-content/uploads/2025/06/npu-usage-fastsd-300x233.jpg 300w, https://nolowiz.com/wp-content/uploads/2025/06/npu-usage-fastsd-768x597.jpg 768w, https://nolowiz.com/wp-content/uploads/2025/06/npu-usage-fastsd-150x117.jpg 150w" sizes="(max-width: 843px) 100vw, 843px" /></figure>



<p>As we can see, NPU usage spikes in the Task Manager, so Claude Desktop effectively used the NPU to generate an image with FastSD, thanks to OpenVINO and the Intel Core Ultra processor.</p>






<figure class="wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
<iframe loading="lazy" title="Claude Desktop Image Generation using FastSD MCP Server on NPU  #mcpserver  #llm #openvino" width="900" height="506" src="https://www.youtube.com/embed/iexild-wF2U?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
</div></figure>






<h2>Conclusion</h2>



<p>In conclusion, Claude Desktop can generate images on the NPU using the FastSD MCP server and OpenVINO. Also learn <a href="https://nolowiz.com/how-to-use-comfyui-with-fastsdcpu-and-openvino/" target="_blank" rel="noreferrer noopener">how to use ComfyUI with FastSDCPU and OpenVINO</a>.</p>
<p>The post <a rel="nofollow" href="https://nolowiz.com/cladue-desktop-image-generation-using-fastsd-mcp-server-on-npu-openvino/">Claude Desktop Image Generation using FastSD MCP Server on NPU &#8211; OpenVINO</a> appeared first on <a rel="nofollow" href="https://nolowiz.com">NoloWiz</a>.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
