{"id":4880,"date":"2026-03-07T17:01:28","date_gmt":"2026-03-07T17:01:28","guid":{"rendered":"https:\/\/www.creatingsmarthome.com\/?p=4880"},"modified":"2026-03-15T08:30:41","modified_gmt":"2026-03-15T08:30:41","slug":"from-cloud-to-local-supercharging-home-assistant-with-local-llms","status":"publish","type":"post","link":"https:\/\/www.creatingsmarthome.com\/index.php\/2026\/03\/07\/from-cloud-to-local-supercharging-home-assistant-with-local-llms\/","title":{"rendered":"From Cloud to Local: Supercharging Home Assistant with Local LLMs"},"content":{"rendered":"\n<p>LLMs are here to stay, and they are fundamentally changing how we interact with machines\u2014especially when it comes to smart homes and Home Assistant.<\/p>\n\n\n\n<p>In this article, I\u2019m walking you through my personal setup and how I use local LLMs with my Home Assistant instance. I\u2019ll also share the pitfalls to avoid and best practices for anyone looking to integrate LLMs into their own smart home. This post is a continuation of my previous article about <a href=\"https:\/\/www.creatingsmarthome.com\/index.php\/2025\/02\/27\/home-assistant-setting-up-home-assistant-voice-pe-and-using-it-in-native-language\/\">my experiences with the Home Assistant Voice PE<\/a>, so you might want to check that out first before diving in here.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Cloud vs. Local LLMs<\/h2>\n\n\n\n<p>Before going fully local, I used cloud-based LLMs for a while. 
While cloud models are a fantastic starting point, you eventually start wondering if things would be better running entirely on your own hardware.<\/p>\n\n\n\n<p>Here is a quick breakdown of why I made the switch:<\/p>\n\n\n\n<figure class=\"wp-block-table is-style-stripes\"><table class=\"has-fixed-layout\"><thead><tr><td>Feature<\/td><td>Cloud LLMs<\/td><td>Local LLMs<\/td><\/tr><\/thead><tbody><tr><td><strong>Performance<\/strong><\/td><td>Exceptional and fast (powered by massive remote servers).<\/td><td>Depends heavily on your local hardware (GPU).<\/td><\/tr><tr><td><strong>Privacy<\/strong><\/td><td>Your entities and data are sent to the cloud (often used for training).<\/td><td>100% private. Data never leaves your network.<\/td><\/tr><tr><td><strong>Reliability<\/strong><\/td><td>Dependent on your internet connection.<\/td><td>Works offline. If the internet drops, your house still listens.<\/td><\/tr><tr><td><strong>Cost<\/strong><\/td><td>Requires API tokens; fractions of a cent per query add up over time.<\/td><td>Higher upfront hardware cost, but free to query (minus electricity).<\/td><\/tr><tr><td><strong>Learning Experience<\/strong><\/td><td>Black-box setup; it&#8217;s plug-and-play, but you don&#8217;t get to see how the magic happens.<\/td><td>Hands-on tinkering; you learn exactly how LLMs are built, set up, and managed under the hood.<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<p>For me, the API costs weren&#8217;t a dealbreaker, but I love experimenting and demand total local control over my smart home. So, I went all-in on local.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">My Local Hardware &amp; Setup<\/h2>\n\n\n\n<p>The goal for this build was to keep it as affordable and energy-efficient as possible. 
Almost all the parts (except the SSD) are recycled, either sourced from friends and family or bought from second-hand marketplaces.<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Processor:<\/strong>&nbsp;Intel i7-9700T (8 cores, only 35W TDP)<\/li>\n\n\n\n<li><strong>Memory:<\/strong>&nbsp;32GB DDR4<\/li>\n\n\n\n<li><strong>GPU:<\/strong>&nbsp;2x Nvidia RTX 3060 12GB<\/li>\n\n\n\n<li><strong>SSD:<\/strong>&nbsp;Samsung Evo 990 Pro 2TB<\/li>\n\n\n\n<li><strong>Motherboard:<\/strong>\u00a0MSI z370-a PRO<\/li>\n<\/ul>\n\n\n\n<p>The biggest hurdle when running a local LLM is the GPU. In my opinion, the absolute minimum VRAM required is 12GB. This allows you to run a decently sized model entirely in VRAM without spilling over into system RAM. Once a model is split between the GPU and the CPU, performance slows to an unusable crawl for a voice assistant. I found the Nvidia RTX 3060 12GB to be the perfect, affordable sweet spot to start experimenting.<\/p>\n\n\n\n<p>Initially, I only had one 3060. I eventually added a second to see if I could run larger models in parallel. While it technically works, larger models require more sheer compute power. The 3060 simply lacks the juice to process massive models quickly enough. When you ask Home Assistant to turn on the lights, you need a response within seconds, not minutes.<\/p>\n\n\n\n<p><strong>Power Draw:<\/strong>\u00a0The full setup idles at around 57W. When processing LLM commands, it can spike up to 400W. 
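<\/p>\n\n\n\n<p>At 57W idle, that works out to roughly 1.4 kWh per day, or about 500 kWh per year before any LLM queries. If the server sits behind a smart plug with an energy sensor, a Home Assistant template sensor can keep an eye on the running cost. This is just a sketch; the entity ID and the flat tariff are assumptions to adapt to your own setup:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>template:\n  - sensor:\n      - name: \"LLM server energy cost today\"\n        unit_of_measurement: \"EUR\"\n        state: &gt;-\n          {# Hypothetical energy sensor from a smart plug (kWh today) #}\n          {% set kwh = states('sensor.llm_server_energy_today') | float(0) %}\n          {# Example flat tariff of 0.15 EUR per kWh; replace with your own #}\n          {{ (kwh * 0.15) | round(2) }}\n<\/code><\/pre>\n\n\n\n<p>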
Not too bad, though running a server 24\/7\/365 will inevitably make a small dent in the power bill.<\/p>\n\n\n\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-1 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"960\" height=\"723\" data-id=\"4943\" src=\"https:\/\/www.creatingsmarthome.com\/wp-content\/uploads\/2026\/03\/PXL_20260112_103154511.jpg\" alt=\"\" class=\"wp-image-4943\" srcset=\"https:\/\/www.creatingsmarthome.com\/wp-content\/uploads\/2026\/03\/PXL_20260112_103154511.jpg 960w, https:\/\/www.creatingsmarthome.com\/wp-content\/uploads\/2026\/03\/PXL_20260112_103154511-300x226.jpg 300w, https:\/\/www.creatingsmarthome.com\/wp-content\/uploads\/2026\/03\/PXL_20260112_103154511-768x578.jpg 768w\" sizes=\"auto, (max-width: 960px) 100vw, 960px\" \/><\/figure>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"723\" height=\"960\" data-id=\"4942\" src=\"https:\/\/www.creatingsmarthome.com\/wp-content\/uploads\/2026\/03\/PXL_20250926_142304149.jpg\" alt=\"\" class=\"wp-image-4942\" srcset=\"https:\/\/www.creatingsmarthome.com\/wp-content\/uploads\/2026\/03\/PXL_20250926_142304149.jpg 723w, https:\/\/www.creatingsmarthome.com\/wp-content\/uploads\/2026\/03\/PXL_20250926_142304149-226x300.jpg 226w\" sizes=\"auto, (max-width: 723px) 100vw, 723px\" \/><\/figure>\n<\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">The Software Stack<\/h2>\n\n\n\n<p>On the bare metal, I\u2019m running&nbsp;<strong>Proxmox<\/strong>, a free, open-source Linux virtualization platform with a massive community.<\/p>\n\n\n\n<p>Inside Proxmox, I have\u00a0<strong><a href=\"https:\/\/openwebui.com\" target=\"_blank\" rel=\"noreferrer noopener\">Open WebUI<\/a><\/strong>\u00a0running as an LXC (container). 
Open WebUI acts as a sleek web interface for controlling and managing the built-in <a href=\"https:\/\/ollama.com\" target=\"_blank\" rel=\"noreferrer noopener\">Ollama<\/a> instance. It makes managing models infinitely easier than dealing with plain Ollama via the command line.\u00a0<em>(I also run other virtual machines on this Proxmox node, but we\u2019ll save that for another article!)<\/em><\/p>\n\n\n\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-2 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"673\" data-id=\"4938\" src=\"https:\/\/www.creatingsmarthome.com\/wp-content\/uploads\/2026\/03\/Screenshot-2026-03-07-at-18.25.14-1024x673.png\" alt=\"\" class=\"wp-image-4938\" srcset=\"https:\/\/www.creatingsmarthome.com\/wp-content\/uploads\/2026\/03\/Screenshot-2026-03-07-at-18.25.14-1024x673.png 1024w, https:\/\/www.creatingsmarthome.com\/wp-content\/uploads\/2026\/03\/Screenshot-2026-03-07-at-18.25.14-300x197.png 300w, https:\/\/www.creatingsmarthome.com\/wp-content\/uploads\/2026\/03\/Screenshot-2026-03-07-at-18.25.14-768x505.png 768w, https:\/\/www.creatingsmarthome.com\/wp-content\/uploads\/2026\/03\/Screenshot-2026-03-07-at-18.25.14.png 1166w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"548\" data-id=\"4940\" src=\"https:\/\/www.creatingsmarthome.com\/wp-content\/uploads\/2026\/03\/Screenshot-2026-03-07-at-18.26.43-1024x548.png\" alt=\"\" class=\"wp-image-4940\" srcset=\"https:\/\/www.creatingsmarthome.com\/wp-content\/uploads\/2026\/03\/Screenshot-2026-03-07-at-18.26.43-1024x548.png 1024w, https:\/\/www.creatingsmarthome.com\/wp-content\/uploads\/2026\/03\/Screenshot-2026-03-07-at-18.26.43-300x160.png 300w, 
https:\/\/www.creatingsmarthome.com\/wp-content\/uploads\/2026\/03\/Screenshot-2026-03-07-at-18.26.43-768x411.png 768w, https:\/\/www.creatingsmarthome.com\/wp-content\/uploads\/2026\/03\/Screenshot-2026-03-07-at-18.26.43-1536x821.png 1536w, https:\/\/www.creatingsmarthome.com\/wp-content\/uploads\/2026\/03\/Screenshot-2026-03-07-at-18.26.43.png 1552w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n<\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">LLM Models Tested<\/h2>\n\n\n\n<p>I\u2019ve experimented with several free LLM models. Some are phenomenal, some struggle, and others outright fail.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p><strong>Crucial Requirement:<\/strong>&nbsp;To work with Home Assistant, a model&nbsp;<strong>must<\/strong>&nbsp;support &#8220;tools&#8221; (function calling). This is the ability of an LLM to connect with external systems. Without tool support, Home Assistant cannot interact with the model.<\/p>\n<\/blockquote>\n\n\n\n<p>Here are the models I\u2019ve tested on my 12GB 3060 setup:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Qwen3:8b:<\/strong>&nbsp;A great base model for Home Assistant. It understands and responds well. The English is excellent, though it struggles with Finnish (my native language), often making spelling mistakes.<\/li>\n\n\n\n<li><strong>Qwen3:14b:<\/strong>\u00a0A little bit smarter than the 8b version. It runs great on my dual 3060 setup, though a single 3060 sometimes struggles. It shares the same Finnish language limitations.<\/li>\n\n\n\n<li><strong>gemma3-tools:12b:<\/strong>&nbsp;Google\u2019s free model, modified by the community to add tool support. 
It works decently but isn&#8217;t nearly as smart as Qwen3 for Home Assistant tasks.<\/li>\n\n\n\n<li><strong>Qwen3.5:9b:<\/strong>\u00a0I didn&#8217;t notice much of a difference compared to Qwen3:8b.<\/li>\n\n\n\n<li><strong>Mistral-small:22b:<\/strong>&nbsp;Unusably slow for Home Assistant on my specific hardware setup.<\/li>\n\n\n\n<li><strong>Deepseek-r1:12b:<\/strong>&nbsp;Felt less logical when interacting with Home Assistant and was too slow to provide the rapid outputs needed for voice control.<\/li>\n<\/ul>\n\n\n\n<p><strong>The Verdict:<\/strong>&nbsp;The clear winners for my setup are Qwen3 or Qwen3.5 (8b or 14b). While not perfect\u2014mostly due to the Finnish language barrier\u2014they are fast and understand context exceptionally well.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"362\" src=\"https:\/\/www.creatingsmarthome.com\/wp-content\/uploads\/2026\/03\/Screenshot-2026-03-07-at-18.22.02-1024x362.png\" alt=\"\" class=\"wp-image-4936\" srcset=\"https:\/\/www.creatingsmarthome.com\/wp-content\/uploads\/2026\/03\/Screenshot-2026-03-07-at-18.22.02-1024x362.png 1024w, https:\/\/www.creatingsmarthome.com\/wp-content\/uploads\/2026\/03\/Screenshot-2026-03-07-at-18.22.02-300x106.png 300w, https:\/\/www.creatingsmarthome.com\/wp-content\/uploads\/2026\/03\/Screenshot-2026-03-07-at-18.22.02-768x271.png 768w, https:\/\/www.creatingsmarthome.com\/wp-content\/uploads\/2026\/03\/Screenshot-2026-03-07-at-18.22.02-1536x542.png 1536w, https:\/\/www.creatingsmarthome.com\/wp-content\/uploads\/2026\/03\/Screenshot-2026-03-07-at-18.22.02-1920x678.png 1920w, https:\/\/www.creatingsmarthome.com\/wp-content\/uploads\/2026\/03\/Screenshot-2026-03-07-at-18.22.02.png 2028w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">Home Assistant Scripts &amp; Blueprints<\/h2>\n\n\n\n<p>To get the most out of an LLM, you need to master exposing scripts 
to the assistant. The description you write for the script is essentially the &#8220;prompt&#8221; for the LLM. It tells the model exactly when to call the script and what parameters to pass.<\/p>\n\n\n\n<p>Here\u2019s a practical example using my favorite LLM-enabled script: my robot vacuum integration. This script commands my SwitchBot vacuum to clean or mop specific rooms based on casual conversation. <sub>*(this uses the recently published <a href=\"https:\/\/github.com\/jaco\/switchbot-vacuum\/tree\/main\" target=\"_blank\" rel=\"noreferrer noopener\">switchbot-vacuum integration<\/a>, which supports room-specific cleaning)<\/sub> <\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"312\" src=\"https:\/\/www.creatingsmarthome.com\/wp-content\/uploads\/2026\/03\/assist_vacuum_cleaning-1024x312.jpg\" alt=\"\" class=\"wp-image-4933\" srcset=\"https:\/\/www.creatingsmarthome.com\/wp-content\/uploads\/2026\/03\/assist_vacuum_cleaning-1024x312.jpg 1024w, https:\/\/www.creatingsmarthome.com\/wp-content\/uploads\/2026\/03\/assist_vacuum_cleaning-300x91.jpg 300w, https:\/\/www.creatingsmarthome.com\/wp-content\/uploads\/2026\/03\/assist_vacuum_cleaning-768x234.jpg 768w, https:\/\/www.creatingsmarthome.com\/wp-content\/uploads\/2026\/03\/assist_vacuum_cleaning.jpg 1138w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>vacuum_clean_rooms:\n  alias: Vacuum specific rooms\n  description: Commands the Switchbot robot vacuum to clean, vacuum, or mop the selected rooms.\n  fields:\n    rooms:\n      name: Rooms\n      description: &gt;-\n        List of rooms to be cleaned. Allowed values are ONLY: 'Elias Room', 'Venlas Room', 'Living Room', 'Master Bedroom', 'Utility Room', 'Kitchen', 'Entrance', 'Office'. 
Example: &gt;\n      required: true\n      selector:\n        object: {}\n    mode:\n      name: Cleaning mode\n      description: &gt;-\n        Cleaning mode. Select 'sweep' if the user asks to vacuum or clean. Select 'sweep_mop' if the user asks to mop.\n      required: true\n      default: sweep\n      selector:\n        select:\n          options:\n            - sweep\n            - mop\n            - sweep_mop\n  variables:\n    room_map:\n      'Elias Room': Elias Room\n      'Venlas Room': Venlas Room\n      'Living Room': Living Room\n      'Master Bedroom': Master Bedroom\n      'Utility Room': Utility Room\n      'Kitchen': Kitchen\n      'Entrance': Entrance\n      'Office': Office\n    mapped_rooms: &gt;\n      {% set ns = namespace(result=&#91;]) %}\n      {% set room_input = rooms | default(&#91;]) %}\n      {% set room_list = &#91;room_input] if room_input is string else room_input %}\n      {% for room in room_list %}\n        {% set mapped = room_map.get(room, room) %}\n        {% set ns.result = ns.result + &#91;mapped] %}\n      {% endfor %}\n      {{ ns.result }}\n  sequence:\n    - service: system_log.write\n      data:\n        message: \"Vacuum script triggered! Mode: {{ mode }} | Mapped Rooms: {{ mapped_rooms }} | Original Input: {{ rooms }}\"\n        level: info\n        logger: script.vacuum_clean_rooms\n    - service: switchbot_vacuum.clean_rooms\n      data:\n        rooms: \"{{ mapped_rooms }}\"\n        mode: \"{{ mode }}\"\n        water_level: 1\n        fan_level: 3\n      target:\n        device_id: 61aebd9cbb6de5013e26adc5d3d6c04d\n<\/code><\/pre>\n\n\n\n<p>In the main script description, I tell the LLM exactly\u00a0<em>what<\/em>\u00a0it does. In the parameter descriptions, I instruct the LLM on\u00a0<em>how<\/em>\u00a0to use the fields. 
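<\/p>\n\n\n\n<p>The same pattern scales down to much simpler scripts. Here is a minimal sketch of the idea; the entities and wording are purely illustrative, not from my actual setup:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>good_night:\n  alias: Good night mode\n  description: &gt;-\n    Run this when the user says good night or asks to put the house to\n    sleep. Turns off the lights and locks the front door.\n  fields:\n    keep_hallway_light:\n      name: Keep hallway light\n      description: Set to true ONLY if the user explicitly asks to keep the hallway light on.\n      required: false\n      default: false\n      selector:\n        boolean: {}\n  sequence:\n    # Hypothetical entities below; adjust to your own home\n    - service: light.turn_off\n      target:\n        area_id: bedroom\n    - if:\n        - condition: template\n          value_template: \"{{ not keep_hallway_light }}\"\n      then:\n        - service: light.turn_off\n          target:\n            entity_id: light.hallway\n    - service: lock.lock\n      target:\n        entity_id: lock.front_door\n<\/code><\/pre>\n\n\n\n<p>The description tells the model when to trigger the script, and the field description tells it when (and when not) to set the parameter.<\/p>\n\n\n\n<p>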
The rest is just standard YAML magic\u2014like mapping my Finnish voice commands to the English room names configured in the vacuum <sub>*(in my environment those mappings are in Finnish; depending on your language, the mapping may not be needed at all)<\/sub>.<\/p>\n\n\n\n<p>There are also fantastic ready-made blueprints available online. Two of my favorites are the\u00a0<strong><a href=\"https:\/\/github.com\/music-assistant\/voice-support\/tree\/main\/llm-script-blueprint\" target=\"_blank\" rel=\"noreferrer noopener\">Music Assistant LLM Blueprint<\/a><\/strong>\u00a0(which lets you play media on any speaker via natural language) and the <strong><a href=\"https:\/\/github.com\/TheFes\/ha-blueprints\/tree\/main\/weather\" target=\"_blank\" rel=\"noreferrer noopener\">Weather Forecast Blueprint<\/a><\/strong>. The weather blueprint can be tricky because LLMs inherently struggle to know what the current date is without proper context injection, but it&#8217;s an excellent starting point that I highly recommend tweaking.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">MCP (Model Context Protocol)<\/h2>\n\n\n\n<p><a href=\"https:\/\/www.home-assistant.io\/integrations\/mcp_server\/\" target=\"_blank\" rel=\"noreferrer noopener\">Home Assistant supports MCP out of the box<\/a>. MCP allows your LLM agent to connect to external internet resources safely. 
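<\/p>\n\n\n\n<p>An agent like this isn&#8217;t limited to voice, either: Home Assistant can call it from automations through the conversation.process action, which is handy for things like a scheduled news summary. A sketch of the idea; the agent ID and notify service below are assumptions, so swap in your own:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>automation:\n  - alias: Morning news briefing\n    trigger:\n      - platform: time\n        at: \"07:30:00\"\n    action:\n      # Ask the LLM-backed conversation agent (hypothetical agent_id)\n      - service: conversation.process\n        data:\n          text: Give me a short summary of the overnight news.\n          agent_id: conversation.local_llm\n        response_variable: briefing\n      # Push the answer to a phone (hypothetical notify service)\n      - service: notify.mobile_app_phone\n        data:\n          message: \"{{ briefing.response.speech.plain.speech }}\"\n<\/code><\/pre>\n\n\n\n<p>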
By running an <a href=\"https:\/\/github.com\/sparfenyuk\/mcp-proxy\" target=\"_blank\" rel=\"noreferrer noopener\">MCP proxy<\/a>, I can give my local Home Assistant agent internet access.<\/p>\n\n\n\n<p>While I don&#8217;t use it constantly, it\u2019s a neat feature that allows me to ask my smart home for the overnight news or to check specific information on the web without leaving my local ecosystem.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Top Tips &amp; Pitfalls to Avoid<\/h2>\n\n\n\n<p>If you&#8217;re ready to build your own local LLM setup, keep these things in mind:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>VRAM is King:<\/strong>&nbsp;You need a GPU with at least 12GB of VRAM. Don&#8217;t go any lower. A 3090 24GB is optimal, but it will cost more upfront and draw significantly more power.<\/li>\n\n\n\n<li><strong>Don&#8217;t Expose Everything:<\/strong>&nbsp;Never expose all your Home Assistant entities to the assistant. More entities equal a larger context window, which slows down response times and increases the chance of the LLM hallucinating or executing the wrong command.<\/li>\n\n\n\n<li><strong>Experiment Frequently:<\/strong>&nbsp;With Open WebUI, swapping models is as easy as clicking a button. What works for my setup (and language) might not work for yours.<\/li>\n\n\n\n<li><strong>Embrace the Proxmox Learning Curve:<\/strong>&nbsp;Passing a GPU through to an LXC container isn&#8217;t always point-and-click, but once it\u2019s configured, it runs flawlessly. Since Proxmox 9.1, handling dependencies has gotten much easier. Lean on the community (or ask AI) if you get stuck!<\/li>\n\n\n\n<li><strong>Start with the Cloud:<\/strong>&nbsp;If you are unsure, try a cloud LLM first. Once you realize how powerful natural language control is, then make the leap to local hardware.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Final Thoughts<\/h2>\n\n\n\n<p>I absolutely love commanding my smart home in my native language. 
I no longer have to remember rigid catchphrases or exact device names. I can just talk naturally, and my house understands what I mean.<\/p>\n\n\n\n<p>It\u2019s not flawless yet, but setting this up truly makes your smart home feel like a piece of the future. Once you get started and the basics are in place, the rest is just adding more features!<\/p>\n","protected":false},"excerpt":{"rendered":"<p>LLMs are here to stay, and they are fundamentally changing how we interact with machines\u2014especially when it comes to smart homes and Home Assistant. In&hellip;<\/p>\n","protected":false},"author":1,"featured_media":4946,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[100,2],"tags":[422,482,476,7,474,477,481,475,478,479,480,8],"class_list":["post-4880","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-home-assistant","category-inspiration","tag-ai","tag-best-local-llm-for-home-assistant","tag-gpu","tag-home-assistant","tag-local","tag-local-llm","tag-mcp","tag-ollama","tag-open-webui","tag-proxmox","tag-qwen3","tag-smart-home","has-post-thumbnail-archive"],"acf":[],"featured_image_src":"https:\/\/www.creatingsmarthome.com\/wp-content\/uploads\/2026\/03\/Gemini_Generated_Image_vnq4nxvnq4nxvnq4.png","author_info":{"display_name":"Toni","author_link":"https:\/\/www.creatingsmarthome.com\/index.php\/author\/topsy\/"},"_links":{"self":[{"href":"https:\/\/www.creatingsmarthome.com\/index.php\/wp-json\/wp\/v2\/posts\/4880","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.creatingsmarthome.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.creatingsmarthome.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.creatingsmarthome.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.creatingsmarthom
e.com\/index.php\/wp-json\/wp\/v2\/comments?post=4880"}],"version-history":[{"count":46,"href":"https:\/\/www.creatingsmarthome.com\/index.php\/wp-json\/wp\/v2\/posts\/4880\/revisions"}],"predecessor-version":[{"id":4959,"href":"https:\/\/www.creatingsmarthome.com\/index.php\/wp-json\/wp\/v2\/posts\/4880\/revisions\/4959"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.creatingsmarthome.com\/index.php\/wp-json\/wp\/v2\/media\/4946"}],"wp:attachment":[{"href":"https:\/\/www.creatingsmarthome.com\/index.php\/wp-json\/wp\/v2\/media?parent=4880"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.creatingsmarthome.com\/index.php\/wp-json\/wp\/v2\/categories?post=4880"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.creatingsmarthome.com\/index.php\/wp-json\/wp\/v2\/tags?post=4880"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}