Troubleshooting OAK on macOS & The Power of Spatial AI


How a Web Browser Bricked My Computer Vision Project

(And a look at why this camera is worth the connection headache)

The Mystery of X_LINK_DEVICE_NOT_FOUND

If you're working with OAK cameras (Luxonis DepthAI) on macOS and are hitting your head against the wall with this error:

RuntimeError: Failed to find device after booting, error message: X_LINK_DEVICE_NOT_FOUND

Stop looking at your code. Look at your dock. Is Google Chrome open?

The Diagnosis: Modern browsers like Chrome use WebUSB to talk to hardware. On macOS, Chrome can aggressively seize an "exclusive lock" on the OAK device the moment it's plugged in. This lock prevents your Python script from accessing the device, causing it to fail during the boot handshake.

The Fix

  1. Quit Chrome Completely: Right-click the icon in the dock and select "Quit" (or CMD+Q). Closing the window isn't enough.
  2. Unplug and Re-plug: You must physically disconnect the device to clear the "zombie" lock held by the OS.
  3. Run Your Code: It should now connect immediately.

Once I did this, the connection speed jumped to SUPER (USB 3.0) and all the examples started working.
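
If you want your script to tolerate this failure mode instead of dying on the first attempt, a small retry wrapper helps. This is a minimal sketch, not part of the DepthAI API: `connect_fn` and the retry parameters are my own illustration. With a real camera you would pass something like `lambda: depthai.Device(pipeline)`:

```python
import time

def connect_with_retry(connect_fn, retries=3, delay=2.0):
    """Call connect_fn until it succeeds or retries are exhausted.

    X_LINK_DEVICE_NOT_FOUND often clears once the exclusive USB lock
    is released (e.g. after quitting Chrome and re-plugging the camera).
    """
    last_err = None
    for attempt in range(1, retries + 1):
        try:
            return connect_fn()
        except RuntimeError as err:
            if "X_LINK_DEVICE_NOT_FOUND" not in str(err):
                raise  # unrelated failure: surface it immediately
            last_err = err
            print(f"Attempt {attempt}: device not found, retrying...")
            time.sleep(delay)
    raise last_err
```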


What is this "OAK" Device Anyway?

You might be wondering—why go through this trouble for a webcam?

The OAK-D (OpenCV AI Kit - Depth) is not just a camera. It is a Spatial AI powerhouse. Unlike a standard webcam that just sends pixels to your computer, the OAK-D has a dedicated processor (Myriad X) on board.

Why Would Anyone Want One?

1. It Thinks for Itself (Edge AI)

Standard computer vision requires your computer's CPU/GPU to process every frame. The OAK-D runs the AI on the camera itself. This means:

  • Minimal CPU Load: Your computer stays cool and responsive while the camera does the heavy lifting (detecting faces, cars, defects).
  • Low Latency: Decisions are made instantly on the device.

2. It Sees in 3D (Spatial Awareness)

The "D" stands for Depth. With two stereo cameras (like human eyes), it calculates distance to every pixel. It doesn't just see "a person"—it sees "a person 3.5 meters away". This is critical for robotics (avoiding obstacles) and safety systems.
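
The underlying math is classic stereo triangulation: depth Z = f * B / d, where f is the focal length in pixels, B is the baseline between the two cameras, and d is the disparity. A minimal sketch, with illustrative numbers: the 7.5 cm baseline roughly matches the OAK-D's spec, while the default focal length here is a placeholder, not a calibrated value:

```python
def depth_from_disparity(disparity_px, focal_px=800.0, baseline_m=0.075):
    """Classic stereo triangulation: Z = f * B / d.

    focal_px and baseline_m are illustrative defaults, not calibration data.
    A larger disparity means the point is closer to the camera.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```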

3. Privacy by Design

Because the AI processing happens on the device, you can build systems that only send metadata (e.g., "Person count: 5") rather than streaming video of people's faces to a cloud server.
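
In code, that privacy boundary is just a reduction step before anything leaves the device. A hypothetical sketch, where the detection dicts are my own illustration rather than the DepthAI message format:

```python
def summarize_detections(detections, min_confidence=0.5):
    """Reduce raw detections to privacy-preserving metadata.

    Only label counts leave the device: no pixels, no faces, no identities.
    Each detection is an illustrative dict like
    {"label": "person", "confidence": 0.9}.
    """
    counts = {}
    for det in detections:
        if det["confidence"] >= min_confidence:
            counts[det["label"]] = counts.get(det["label"], 0) + 1
    return counts
```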

Common Use Cases

  • Robotics: A robot that can navigate a room and identify objects without crashing.
  • Safety: Detecting if a worker is too close to heavy machinery (using the depth sensor).
  • Agriculture: Identifying ripe fruit and its precise 3D location for a robotic arm to pick.
  • Drones: Autonomous following and obstacle avoidance.

GCPLens: Engineering the Future of Cloud Monitoring


I recently deployed GCPLens, a futuristic, location-aware dashboard for Google Cloud. It visualizes cost, health, and usage across the globe with a premium design system.



Engineered with Precision

This project was built to solve the gap between abstract metrics and physical location. Here is the stack powering the experience:

  • Next.js 14: Using App Router & Server Components for speed.
  • Google Maps API: Deep integration with Advanced Markers for real-time visualization.
  • Tailwind CSS: A custom Glassmorphism design system.
  • Google Cloud Run: Containerized auto-scaling deployment.

The Engineering Challenges

Moving from local development to a production cloud environment required solving several key puzzles:

1. Build-Time Secret Management

Next.js aggressively bundles client-side code during the build phase. Integrating the Google Maps API key required passing it as a build argument (--build-arg) to Docker, ensuring it was baked into the static assets without exposing it in the repository.

2. Container Optimization

Standard Node containers failed health checks on Cloud Run. I optimized the Dockerfile to use the node:22-alpine base image and explicitly bound the Next.js server to 0.0.0.0 so the container orchestrator could route traffic properly.

3. UX & Hydration Stability

Integrating complex browser-only libraries like Google Maps with React Server Components caused initial hydration mismatches. I implemented strict boundary checks and deterministic data generation to ensure a butter-smooth initial load.

4. Mobile Browser Optimization

To ensure GCPLens is truly "location-aware" for field teams, I implemented a mobile-first navigation system. This included a floating glassmorphism bottom bar for touch-friendly transitions and responsive layout adjustments that prioritize critical KPIs and map visibility on smaller screens. Using Framer Motion, I added fluid animations to the mobile navigation to maintain the premium, high-tech feel even on the go.

Deployed live on Google Cloud Platform (us-central1).

Building Emo Narrator: Bringing Emotion to AI Voices (ElevenLabs, AWS Amplify, Next.js, UI/UX)

I recently finished Emo Narrator, a full-stack Next.js application that uses ElevenLabs AI to turn plain text into emotionally charged narration. With a custom glassmorphism UI and an AWS Amplify backend, I let users choose between different voices and apply distinct emotional profiles to their speech. You can try the live app here: Emo Narrator.

The Technical Hurdle: Deploying Secrets on AWS Amplify

Every project has its “wall,” and for me it was a missing environment variable. Local development was seamless, but once I deployed to AWS Amplify Hosting, the Next.js server-side rendering (SSR) runtime wasn’t reliably picking up my ElevenLabs API key from the standard environment-variable settings.

The "Secret Sauce" Fix:

I created a custom initialization script, inject-secrets.sh, which runs during the Amplify build phase. This script manually fetches the environment variable and writes it into a .env.production file:

#!/bin/bash

# Write the build-time secret into .env.production so the SSR runtime can read it
echo "Injecting secrets into .env.production"
echo "ELEVENLABS_API_KEY=$ELEVENLABS_API_KEY" >> .env.production

This ensures that my API route in /api/narrate always has the credentials it needs to communicate with the ElevenLabs servers in the deployed environment.

Tuning the Tones: Why the Weights Matter

To make the emotions feel real, I didn’t just guess values; I tuned the ElevenLabs voice_settings using three key parameters: stability, style, and similarity boost. Here’s how I think about the main presets:

  • Neutral (Stability: 0.75): High stability keeps the delivery consistent, calm, and professional for standard information.
  • Excited / Angry (Stability: 0.25, Style: 0.9): I drop stability significantly and push style close to max. That “unleashes” the model, letting it produce sharper tonal spikes and a wider expressive range that better matches high-intensity emotions.
  • Whisper (Stability: 0.4, Style: 1.0): I maximize style and pair it with about a 0.85x playback rate on the frontend, which stretches out the breathy qualities of the voice and creates an eerie, intimate atmosphere.
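
In code, the presets above boil down to a small lookup table. This is a hedged sketch: the stability and style numbers come straight from the tuning described above, while the style value for Neutral and all similarity_boost values are assumptions I filled in for illustration:

```python
# Emotional presets as plain voice-settings dicts.
# stability/style values match the post; the neutral style and all
# similarity_boost values are assumed placeholders, not tuned numbers.
PRESETS = {
    "neutral": {"stability": 0.75, "style": 0.0, "similarity_boost": 0.75},
    "excited": {"stability": 0.25, "style": 0.9, "similarity_boost": 0.75},
    "whisper": {"stability": 0.4,  "style": 1.0, "similarity_boost": 0.75},
    # whisper is additionally paired with ~0.85x playback on the frontend
}

def settings_for(emotion):
    """Look up a preset, falling back to neutral for unknown emotions."""
    return PRESETS.get(emotion, PRESETS["neutral"])
```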

Voice Limitations

On the free tier, ElevenLabs limits how many active voices I can use concurrently. To stay within those constraints and still keep variety, I chose four versatile voices: Rachel, Dominic, Bella, and Antoni. Together they give a good balance of gender and character for different narration needs.

If you’d like to hear these emotional layers in action, you can experiment with the live app here: https://main.d11c506w0cprcx.amplifyapp.com/.

LunchDeals AI - AI App using React, TypeScript, Tailwind CSS, Express, Web Crawl AIs, and Antigravity


LunchDeals AI: Never Pay Full Price for Lunch Again 🍔💸

Link to my AWS Amplify-based app (frontend only) – APP LINK

Have you ever found yourself at 12:30 PM, hungry and staring at the same three overpriced sandwich shops near your office? You want a deal, but who has time to search through five different apps just to save $2? That’s why I created LunchDeals AI—so you never have to overpay for lunch again.

What is LunchDeals AI?

LunchDeals AI is a real-time discovery engine designed to help you find the best lunch specials and discounts within a 15-mile radius of your current location. No more expired coupons or hidden fees—just fresh, local deals delivered instantly.

How It Works

The secret is in the technology. LunchDeals AI uses proprietary web crawling AI to scrape real-time data from restaurant coupon platforms, local delivery apps, and third-party aggregators. Unlike static databases that quickly become outdated, LunchDeals AI finds deals happening right now.

Key Features

  • 📍 Precision GPS Integration: Automatically detects your location to show deals closest to you.
  • 🔥 Real-Time Scraping: Aggregates deals expiring today so you never miss a flash sale.
  • 🥗 Personalized Filtering: Filter by cuisine, dietary needs (vegan, vegetarian, gluten-free), and price point (e.g., under $15).
  • 🗺️ Map & List Views: Switch between a sleek grid view or an interactive map with one-click directions.
  • ⚡ Built for Speed: High-performance React + Vite frontend, backed by an Express server for lightning-fast results.
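
The 15-mile radius filter itself is straightforward great-circle math. Here is a hypothetical sketch; the deal dicts and field names are illustrative, not the app's actual data model:

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_MILES = 3958.8

def miles_between(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points via the haversine formula."""
    p1, p2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(p1) * cos(p2) * sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_MILES * asin(sqrt(a))

def deals_within_radius(deals, lat, lon, radius_miles=15.0):
    """Keep only deals inside the search radius around the user."""
    return [d for d in deals
            if miles_between(lat, lon, d["lat"], d["lon"]) <= radius_miles]
```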

The Design

I wanted the experience to be as appetizing as the food itself. The UI features a vibrant, modern aesthetic with glassmorphism effects, smooth micro-animations, and a vibe that makes finding a deal feel like a win, not a chore.

What's Next?

This is just the beginning. Upcoming features include push notifications for flash deals within two miles and battery-efficient background updates, so you’ll get alerts the moment a deal near you goes live.

Stop overpaying for lunch. Try LunchDeals AI and let me know what you think.

Created with React, TypeScript, Tailwind CSS, Express, and Firecrawl AI.

Embracing Ephemeralization: Paving the Path to a Brighter Future

Introduction:

In our relentless pursuit of progress, humanity's insatiable ambition to create better technology and improve life on Earth has led us down a path of innovation and discovery. One remarkable concept that aligns seamlessly with this drive is "Ephemeralization," a term introduced by the visionary thinker, R. Buckminster Fuller. At the core of ephemeralization lies our collective aspiration to do "more with less," revolutionizing the way we interact with technology and envision a brighter future for ourselves and the planet.


Defining Ephemeralization:

Ephemeralization embodies the very essence of human ingenuity and adaptability. It is the remarkable ability of technology to achieve greater efficiency while utilizing fewer resources, marking an unprecedented shift in our approach to problem-solving and sustainable living. As we aim to create a better world, ephemeralization becomes a beacon of hope, driving us to overcome the challenges that lie ahead.


Unleashing Human Potential:

The history of human progress is woven with tales of ingenuity and creativity. From the invention of the wheel to groundbreaking space exploration, our desire to do more with less has propelled us forward. Ephemeralization has been instrumental in unleashing human potential, empowering us to transcend limitations and push the boundaries of possibility.


Environmental Sustainability:

As we stand at a crucial crossroads in the face of environmental challenges, ephemeralization offers a lifeline for our planet. By harnessing cutting-edge technology and adopting sustainable practices, we can create eco-friendly solutions that minimize waste, conserve resources, and mitigate our impact on the environment. From renewable energy sources to efficient transportation systems, ephemeralization serves as a catalyst for achieving a greener and more sustainable future.


Social Impact:

Beyond its environmental implications, ephemeralization holds immense promise for social progress. Advancements in technology have transformed communication, healthcare, education, and accessibility, opening doors for improved quality of life and greater inclusivity. By embracing ephemeralization, we have the power to uplift communities, bridge disparities, and pave the way for a more equitable and compassionate society.


Conclusion:

In the grand tapestry of human endeavor, ephemeralization emerges as a guiding principle, weaving together our ambition to create better technology and improve life on Earth. With every innovation and groundbreaking discovery, we inch closer to a future where doing more with less becomes a reality. As we unite our efforts, driven by a shared vision, let us embrace the transformative power of ephemeralization and usher in an era of sustainable progress and a brighter future for all.

References:

1. Fuller, R. B. (1938). Nine Chains to the Moon.

2. McHarg, I. (1970). Design with Nature.

3. Hawken, P., Lovins, A., & Lovins, L. H. (1999). Natural Capitalism: Creating the Next Industrial Revolution.




"Game Theory and Berge Equilibrium: The Hidden Forces Shaping Our Lives and the Economy"

Game theory and the Berge equilibrium are two concepts that have a significant impact on our lives, the business world, and the economy. Game theory is the mathematical study of decision-making and behavior in strategic situations, while the Berge equilibrium is a concept in game theory describing a situation in which each player's payoff is maximized by the strategies chosen by all the other players, a kind of mutual support. Together, they help explain why we make certain choices and how those choices can ripple through the broader economy.


The foundations of game theory were laid in the mid-20th century by John von Neumann and Oskar Morgenstern, and famously extended by mathematician John Nash. Since then, game theory has been applied to various fields, including economics, political science, and biology. The basic idea is to model decision-making situations as games in which each player tries to achieve a desired outcome: players must take the actions of the other players into account and choose accordingly.


The Berge equilibrium, introduced by the French mathematician Claude Berge, is a specific outcome in which every player's payoff is maximized by the choices of the other players. Where a Nash equilibrium means no player can gain by unilaterally changing strategy, a Berge equilibrium means each player is fully supported by everyone else's strategies. The distinction shows up clearly in classic games such as the prisoner's dilemma or the stag hunt.
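
To make the distinction concrete, here is a minimal sketch (the payoff numbers are the standard illustrative prisoner's dilemma values) that checks a strategy profile against two conditions: the Nash condition (no player gains by unilaterally deviating) and the Berge condition (each player's payoff is maximized by the other player's choice):

```python
# Prisoner's dilemma payoffs as (row player, column player) utilities.
C, D = "cooperate", "defect"
PAYOFFS = {
    (C, C): (3, 3), (C, D): (0, 5),
    (D, C): (5, 0), (D, D): (1, 1),
}
STRATEGIES = (C, D)

def is_nash(s1, s2):
    """Nash condition: no player gains by unilaterally switching."""
    u1, u2 = PAYOFFS[(s1, s2)]
    return (all(PAYOFFS[(a, s2)][0] <= u1 for a in STRATEGIES)
            and all(PAYOFFS[(s1, b)][1] <= u2 for b in STRATEGIES))

def is_berge(s1, s2):
    """Berge condition: each player's payoff is maximized by the
    *other* player's choice, given the player's own strategy."""
    u1, u2 = PAYOFFS[(s1, s2)]
    return (all(PAYOFFS[(s1, b)][0] <= u1 for b in STRATEGIES)
            and all(PAYOFFS[(a, s2)][1] <= u2 for a in STRATEGIES))
```

With these payoffs, mutual defection is the Nash equilibrium while mutual cooperation satisfies the Berge condition, which is exactly why the two concepts capture different intuitions about rational play.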





Game theory and the Berge equilibrium play a crucial role in understanding many aspects of our daily lives. For example, in the job market, we can see the impact of game theory on negotiating salaries. Both employers and job seekers are trying to achieve their desired outcomes, and each party must consider the actions of the other. In this situation, the Berge equilibrium can be reached when both parties agree on a salary that meets their needs. 


The business world is also heavily influenced by game theory and the Berge equilibrium. Companies are constantly competing with one another for market share and profits. They must consider the actions of their competitors when making strategic decisions, such as pricing, marketing, and product development. In this situation, the Berge equilibrium can be reached when companies find a balance where each company's market share and profits are optimized.


In the broader economy, game theory and the Berge equilibrium can help explain the cycles of boom and bust that occur. For example, in the housing market, buyers and sellers are playing a game where each party is trying to achieve their desired outcome. When the market is in equilibrium, buyers and sellers are making their optimal decisions given the decisions of the other party. However, if there is a sudden change in the market, such as an economic downturn or an increase in interest rates, the equilibrium can be disrupted, leading to a bust in the market.


In conclusion, game theory and the Berge equilibrium have a significant impact on our lives, the business world, and the economy. By modeling decision-making situations as games and analyzing the Berge equilibrium, we can better understand why we make certain choices and how those choices can impact the broader economy. As we continue to face new challenges and changes in our world, game theory and the Berge equilibrium will remain essential tools in understanding and predicting human behavior.




Recovery

The path ahead seems steep and long, 

A daunting road to walk alone, 

But with each step, you grow more strong, 

And through the struggles, seeds are sown.

The shadows of the past may loom, 

A weight that's hard to cast aside, 

But with each day, a new day blooms, 

And hope and faith will be your guide.


Though progress may be slow at first, 

Each victory is worth the fight, 

And with each milestone, quench your thirst, 

For peace and joy and renewed light.


So take each day, one step at a time, 

And trust the journey that you're on, 

For with each rise, a new sun will shine, 

And a new day will dawn.

DeepDetector is the new "Hello World"

Deep learning has become an increasingly popular field in recent years, and with it comes the need for powerful tools to create and train deep neural networks. Enter DeepDetector, a deep learning library that is quickly becoming the new "Hello World" for machine learning enthusiasts.

DeepDetector is a Python-based library that allows users to easily create, train, and deploy deep neural networks. With its intuitive API and powerful functionality, DeepDetector has quickly become a go-to choice for both beginner and advanced machine learning practitioners.

One of the key features of DeepDetector is its ease of use. With just a few lines of code, users can create and train a deep neural network, making it an excellent choice for those new to the field. Here's an example of how simple it can be to get started with DeepDetector:


  
from deepdetector import models

# Define a small convolutional network for 28x28 grayscale images
model = models.Sequential([
  models.Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(28, 28, 1)),
  models.MaxPooling2D(pool_size=(2, 2)),
  models.Flatten(),
  models.Dense(10, activation='softmax')  # one probability per class
])

# Compile with the Adam optimizer and a loss suited to one-hot labels
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# Train for 10 epochs, validating on held-out data after each epoch
model.fit(x_train, y_train, epochs=10, validation_data=(x_test, y_test))

Make sure to replace x_train, y_train, x_test, and y_test with your actual data.

In just a few lines of code, we've created a convolutional neural network, compiled it with an optimizer and loss function, and trained it on our data.

DeepDetector also offers a wide range of pre-trained models that can be easily imported and used for various applications. These pre-trained models are perfect for those looking to quickly add deep learning capabilities to their projects without spending the time and resources needed to train their own models.

In conclusion, DeepDetector has quickly become a popular tool in the world of deep learning, thanks to its ease of use, powerful functionality, and wide range of pre-trained models. Whether you're new to the field or an experienced practitioner, DeepDetector is definitely worth checking out. So why not give it a try and see what you can create?