<?xml version="1.0" encoding="UTF-8" ?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>kaiomagalhaes</title>
  <subtitle>a blog about (mostly) computery things</subtitle>
  <link rel="alternate" type="text/html" href="https://www.kaiomagalhaes.com/"/>
  <link rel="self" type="application/atom+xml" href="https://www.kaiomagalhaes.com/rss"/>
  <id>https://www.kaiomagalhaes.com/rss</id>
  <updated>2025-11-14T00:00:00.000Z</updated>
  <rights>Copyright © 2026, kaiomagalhaes</rights>
  
    <entry>
      <title>Hiring in the age of AI</title>
      <id>https://www.kaiomagalhaes.com/blog/Hiring-in-an-age-of-ai</id>
      <published>2025-11-14T00:00:00.000Z</published>
      <updated>2025-11-14T00:00:00.000Z</updated>
      <author>
        <name>kaiomagalhaes</name>
        <uri>https://www.kaiomagalhaes.com</uri>
      </author>
      <content type="html" xml:base="https://www.kaiomagalhaes.com" xml:lang="en">
      <![CDATA[ <p>I&#39;ve been working remotely at <a href="https://codelitt.com">Codelitt</a> since 2015, and hiring remotely since 2017. In other words, I was recruiting talent remotely long before COVID forced many industries to either go remote or lose their workforce to more flexible companies. Being an early adopter of remote work opened up a huge pool of candidates. Many were proactively stepping out of their comfort zones to try something new. As a Brazilian working with U.S. companies pre-COVID, I often got funny looks when I mentioned I worked remotely; back then it was a novelty.</p>
<p>Between 2017 and 2024, hiring was straightforward enough. As an engineer who&#39;s never been a fan of live coding exercises, I preferred to send candidates a take-home assignment. After they submitted their code, we&#39;d review it together in a follow-up interview, talking through their technical choices and thought process. We also had a culture call to discuss communication (yes, English proficiency mattered), priorities, and alignment with the team. I knew some candidates might have gotten a little help on the take-home, but I liked to give them the benefit of the doubt. Any shortcuts would usually become obvious during the technical interview when we dug into the details. Life was simple.</p>
<p>Don&#39;t get me wrong, I did encounter some creative attempts at gaming the system in those days. For example, I&#39;ve seen candidates try things like:</p>
<ul>
<li><p>Using a second monitor (just off-camera) to search for answers during a live interview.</p>
</li>
<li><p>Having someone else literally join the interview pretending to be them.</p>
</li>
<li><p>Outsourcing their take-home project to another person, then looking utterly lost when asked to explain the code.</p>
</li>
<li><p>Reading off a prepared script of Q&amp;A during the interview (one candidate accidentally left his cheat sheet visible on screen; needless to say, it didn&#39;t end well).</p>
</li>
</ul>
<p>Those incidents were rare, and frankly a bit amusing in hindsight. However, hiring in the age of AI is a whole new nightmare.</p>
<p>It&#39;s become challenging for both the interviewer and the interviewee. Let&#39;s look at both perspectives.</p>
<h2>Candidates</h2>
<p>From a candidate&#39;s perspective, the interview process used to be refreshingly human. A couple of culture chats and a couple of technical interviews, that was it. Sometimes there would be a live coding exercise or a take-home project. I never minded either approach, since I never had trouble with them. (I do realize that if you&#39;re especially shy or anxious, a live coding session can be nerve-wracking. But for me, it was fine.)</p>
<p>Now, things have changed dramatically for candidates looking for jobs. Just a few weeks ago, I had two friends in my home office applying for new positions, and I was blown away by the hoops companies made them jump through. The process was loaded with AI-driven steps. For instance, some companies now:</p>
<ul>
<li><p>Have you do a phone screening with an AI (basically talking to a chatbot that asks interview questions).</p>
</li>
<li><p>Require a live coding exercise monitored by AI, where software watches your screen and behavior for any signs of cheating.</p>
</li>
</ul>
<p>In these initial stages, it was all AI, no human being on the other side. From the company&#39;s standpoint, they have nothing to lose by automating this: it saves their team&#39;s time and they can churn through candidates easily. But for the candidates, it felt cold and bizarre. Imagine trying to showcase your passion and skills to a machine that&#39;s just parsing keywords or tracking your keystrokes. There&#39;s zero personal connection. It&#39;s like yelling into the void, hoping the void likes your answers.</p>
<h2>Companies</h2>
<p>Now put yourself in the company&#39;s shoes. As an interviewer or hiring manager, it&#39;s virtually impossible to tell if a candidate is getting a little AI assistance during a remote interview. Thanks to modern tools, a determined candidate can have an AI sidekick without you ever knowing. They might:</p>
<ul>
<li><p>Run an undetectable AI program locally that listens in and feeds them suggested answers (in a hidden window that isn&#39;t captured on the screen share).</p>
</li>
<li><p>Prop a phone or tablet just out of view, using it to quickly query <a href="https://openai.com/chatgpt">ChatGPT</a> or another assistant for help mid-interview.</p>
</li>
</ul>
<p>With these tricks, a candidate could theoretically answer even complex questions by relaying what the AI suggests, in real time. As an interviewer, you&#39;re left wondering if the pause they took is because they were thinking or because they were waiting for <a href="https://openai.com/chatgpt">ChatGPT</a> to respond. Verifying someone&#39;s true skill and honesty has become an arms race.</p>
<h3>The rise of the AI fakes</h3>
<p>I believe the wave of fake candidates started appearing in 2020 with the surge in remote positions. I remember facing this problem clearly in 2022, when I&#39;m confident I interviewed one person and hired another. Back then the scheme was simple: people would ask someone else to do the interview, and they would never appear on camera. Nowadays, however, they are taking it one step further. In a recent hiring process for a Python position at <a href="https://codelitt.com">Codelitt</a>, we started seeing candidates use AI filters to look like someone else. This is happening in two ways:</p>
<ul>
<li><p>A candidate of one nationality wants to pass as American because the company is hiring only in the U.S. due to timezone constraints.</p>
</li>
<li><p>A candidate who lacks the necessary experience for a position, instead of creating a fake profile, uses someone else&#39;s real profile and AI to impersonate them on calls.</p>
</li>
</ul>
<p>I&#39;ve personally seen both situations happening at <a href="https://codelitt.com">Codelitt</a>. This means that companies need to step up their game in order to not be fooled.</p>
<p>In fact, some large companies have reportedly resorted to bringing back at least one in-person interview in the process, just to ensure the person they&#39;re talking to is the one doing the thinking. (As <a href="https://www.wsj.com/lifestyle/careers/ai-job-interview-virtual-in-person-305f9fd0">The Wall Street Journal</a> noted, AI is “forcing the return of the in-person job interview” at firms like <a href="https://www.cisco.com">Cisco</a> and <a href="https://www.mckinsey.com">McKinsey</a>. See also <a href="https://www.theatlantic.com/technology/2025/10/ai-cheating-job-interviews-fraud/684568/">The Atlantic</a>.) That&#39;s a heavy price to pay for something as basic as authenticity. For smaller companies or teams hiring remotely around the world, mandating in-person meetings isn&#39;t always practical. So, many are stuck with a dilemma: <em>How do you keep your hiring both fair and effective in this AI-saturated reality?</em></p>
<h2>Final thoughts</h2>
<p>The rise of AI has fundamentally changed the hiring landscape. As an interviewer, I find myself double-checking every unusual hesitation or overly polished answer, wondering if there&#39;s an AI whispering in the candidate&#39;s ear. As a candidate, you might be dealing with faceless AI evaluators that make the process feel soulless. Both sides are adjusting, for better or worse.</p>
<p>At the end of the day, hiring is still about people. Trust and integrity have never been more important. Companies will need to innovate their interview techniques to winnow out AI-assisted pretenders (or decide that using AI is an acceptable skill in itself), and candidates will need to adapt to more rigorous authenticity checks. We&#39;re all navigating this new normal. My hope is that we can strike a balance, leveraging AI where it helps, but keeping the humanity in hiring. Because no matter how smart the machines get, it&#39;s people who build companies. And it&#39;s people we ultimately want to work with.</p>
 ]]>
      </content>
    </entry>
  
    <entry>
      <title>AI First Teams</title>
      <id>https://www.kaiomagalhaes.com/blog/AI-First-Teams</id>
      <published>2025-10-02T00:00:00.000Z</published>
      <updated>2025-10-02T00:00:00.000Z</updated>
      <author>
        <name>kaiomagalhaes</name>
        <uri>https://www.kaiomagalhaes.com</uri>
      </author>
      <content type="html" xml:base="https://www.kaiomagalhaes.com" xml:lang="en">
      <![CDATA[ <p>Over the last few months, the idea of &quot;AI-first teams&quot; has been gaining traction. As I dug deeper into the concept, it struck me that since the earliest releases of AI agents, I had already been structuring <a href="https://codelitt.com/">Codelitt’s</a> internal work in ways that naturally align with this philosophy.</p>
<p>In this piece, I’ll share how we’ve embraced this approach, why it matters, and how any software company, regardless of size or maturity, can apply it to accelerate innovation and execution.</p>
<h3>Where and how did it begin for me?</h3>
<p>With the release of tools like <a href="https://cursor.sh">Cursor</a>, <a href="https://lovable.ai">Lovable</a>, <a href="https://v0.dev">V0</a>, and others, I began noticing something remarkable: people from every discipline were suddenly empowered to prototype their own ideas, whether or not they had any programming experience. This democratization of creation signaled that software was no longer the exclusive domain of engineers, but a playground for anyone with imagination.</p>
<p>One of the most striking examples in my circle came from a heart surgeon. He showed me a product he had built on <a href="https://lovable.ai">Lovable</a> to streamline requests for doctors’ availability at one of the hospitals where he worked. If a surgeon with no formal software background could design and deploy a functional tool, then the rules of who gets to innovate had clearly changed. The barrier to entry had collapsed, and that meant the dynamics inside companies would soon follow.</p>
<p>At the time, many engineers I spoke with were skeptical. The code generated by these tools wasn’t production-ready (spoiler: it still isn’t). But what mattered wasn’t perfection, it was acceleration. And as the tools matured, I saw even experienced engineers adopting them as idea validators, feasibility checkers, and code accelerators. That shift showed me the future wasn’t about replacement, it was about leverage.</p>
<p>At <a href="https://codelitt.com/">Codelitt</a>, we’ve always defined ourselves as builders. My conversations with CEOs, product managers, and entrepreneurs revolve around how to turn vision into reality. Soon, I was using these tools myself: spinning up new projects, stress-testing features, salvaging what worked, and discarding the rest. What once required teams and budgets now required only curiosity and a willingness to experiment. The economics of innovation had changed.</p>
<p>This shift transformed our enterprise work as well. I started creating new functionalities without seeking funds or teams, because the cost of experimentation had become almost zero. From there, I interviewed users to uncover which ideas had real traction. That process was the real eye-opener: it wasn’t just about building faster, it was about discovering sooner. For me, that was the moment “AI-first teams” became less of a trend and more of a blueprint for the future.</p>
<h3>What is an AI first team?</h3>
<p>An AI-first team is one that begins every task with a simple expectation: AI will handle the bulk of the work. Instead of treating AI as an optional add-on, these teams design their workflows so that tasks can be delegated to AI agents first, leaving the human team to focus on reviewing, refining, and guiding the results. The mindset is not about replacing people, but about reframing what human effort is worth spending on.</p>
<p>This approach fundamentally changes the rhythm of work. By leveraging AI as the first-draft generator, an AI-first team can shorten feedback loops, test multiple solutions in parallel, and iterate in a fraction of the time. They are not bound by the traditional constraints of bandwidth, resourcing, or even specialized expertise. Instead, they move with the assumption that the bottleneck is no longer execution, it’s judgment.</p>
<p>The takeaway is clear: the defining strength of an AI-first team is not speed alone, but the ability to explore more possibilities in less time. And in a world where innovation is as much about discovering the right solution as building it, that advantage compounds quickly.</p>
<h3>What is the goal of an AI first team?</h3>
<p>The primary goal of an AI-first team is experimentation. Rather than committing early to a single path, these teams explore a wide range of possible solutions, quickly generating and testing variations until they find one that delivers the right balance of value and feasibility. In this sense, the process isn’t fundamentally different from conventional development, it still requires iteration, validation, and refinement.</p>
<p>What changes is the scale and speed of exploration. AI-first teams can afford to try more ideas, faster, because the cost of each experiment is drastically reduced. However, this approach is not universal. There are scenarios where applying AI-first methods introduces drawbacks, such as complexity, technical debt, or risks in critical systems, that can outweigh the benefits. Knowing when not to apply AI-first thinking is just as important as knowing when to lean into it.</p>
<p>The real lesson is this: the goal of an AI-first team is not to replace traditional development, but to extend its reach. By making experimentation cheaper and more accessible, these teams shift the focus from whether something can be built to which of many possible solutions is worth building.</p>
<h3>When to build an AI First Team and where to deploy it?</h3>
<p>In my experience, projects with extremely high code complexity are not yet a good fit for AI-first teams. Current AI models still struggle with the sheer amount of context required in these environments, making them more prone to hallucinations, brittle solutions, and unintended side effects. In such cases, the cost of error often outweighs the speed of exploration.</p>
<p>The strongest use cases for AI-first teams today are in new initiatives where experimentation is the priority, or in projects with relatively low code complexity. In these contexts, AI can accelerate the creation of first versions, generate multiple approaches, and help teams move from concept to prototype at an unprecedented pace. Through building projects from scratch with AI, I’ve consistently found that the earliest functionalities are easy to implement, but as the system grows, the models often start breaking or rewriting older code. This reinforces a clear pattern: today’s AI is best suited for exploration, not maintenance.</p>
<p>The takeaway is important for leaders: AI-first teams thrive at the frontiers of innovation, where the goal is discovery and validation. They are less effective in highly complex, long-lived systems where stability and continuity matter most. In other words, they are not a wholesale replacement for traditional teams, but a powerful complement when speed, experimentation, and optionality matter more than perfect reliability.</p>
<h3>What is the composition of an AI first team?</h3>
<p>The short answer is that we don’t yet have enough data to define a single “ideal” AI-first team structure. What we do know, however, is the set of roles and resources that are essential for these teams to function.</p>
<p>It’s important to note that two AI-first teams can have the same composition on paper and still deliver completely different outcomes. The differentiator is not who sits on the team, but the degree of freedom they have to experiment, combined with the clarity of the goals they’re working toward.</p>
<p><img src="/assets/blog/ai-teams/ai_team_development_cycle.png" alt="AI team process" title="AI team process"></p>
<h4>1. Product Manager</h4>
<p>Every AI-first team begins with clarity of purpose. The Product Manager is responsible for defining what success looks like, what the team is aiming for in each iteration, and how progress will be measured. In this model, vague ambition is dangerous. Clear, sharp goals are the safeguard against wasted effort. A Product Manager who fails to define these boundaries sets the team up to chase everything, and ultimately achieve nothing.</p>
<h4>2. Engineer</h4>
<p>The Engineer is the builder of first versions. Their role is to take the existing codebase (if there is one), explore multiple approaches, and use AI as a force multiplier to rapidly generate prototypes. Once they land on something that resembles a solution, it’s presented to the Product Manager for validation. Only after that checkpoint does the design layer come into play.</p>
<p>In AI-first teams, the Engineer’s value lies less in perfecting code and more in orchestrating experiments that create viable starting points.</p>
<p>More often than not, the Engineer will be the one proposing the first UI options, generated with the help of AI models.</p>
<h4>3. Designer</h4>
<p>Here is where direction meets experience. An AI-first team needs a clear vision to guide experimentation, but before anything goes to production, especially in more stable products, the Designer must step in. Models can suggest layouts or even generate compelling page designs, but user flow and overall experience require human judgment. That’s where the Designer becomes critical.</p>
<p>Traditionally, designers shaped the user experience from the very beginning. In an AI-first team, the order flips. Instead of defining everything upfront, the Designer evaluates what the Engineer and AI have produced, then realigns it. Sometimes this means small refinements; other times, it requires breaking everything and starting again with sharper guidance.</p>
<p>Designers in AI-first teams are not blueprint makers, they are course correctors. Their role ensures that what AI helps create is not only fast, but also right: not just functional, but usable and genuinely valuable.</p>
<h2>What is the Best Environment for an AI-First Team?</h2>
<p>AI-first teams don’t thrive everywhere. Their effectiveness depends heavily on the environment they operate in. From my experience, the most productive conditions for an AI-first team (AFT) include three key elements: simplicity of codebase, space for iteration, and clarity of goals.  </p>
<p><strong>1. A Simple or Non-Existent Codebase</strong><br>The smaller and less interconnected the codebase, the better. Current AI models remain limited in terms of context, they struggle when required to handle large systems with many dependencies. In projects that span multiple repositories, the cognitive load for AI grows exponentially, creating more opportunities for errors and regressions. Developers can and must course-correct, but the overhead quickly dilutes the speed advantage.  </p>
<p>AI-first teams excel in greenfield projects or lightweight systems, where complexity doesn’t choke the AI’s ability to generate useful contributions.  </p>
<p><strong>2. Room for Iteration</strong><br>The first solution generated by an AI-first team is rarely the one that makes it to production, and that’s by design. The real strength of this model lies in rapid iteration: define a goal, generate a solution, evaluate it, gather feedback, and try again. This cycle needs to be embraced, not resisted. Leaders must understand that early outputs are drafts, not final products.  </p>
<p><strong>3. Clear Goals, Not Prescribed Paths</strong><br>AI-first teams need guidance, not micromanagement. Success comes from leaders setting measurable, outcome-driven goals, <em>what</em> to achieve, not <em>how</em> to achieve it. Without this clarity, experiments risk becoming unfocused, and any result may seem acceptable. Precision in defining goals keeps the team aligned while still giving them the freedom to explore.  </p>
<h2>Worst Environment for an AI-First Team</h2>
<p>Not every context is suitable for an AI-first approach. In fact, there are environments where the drawbacks outweigh the benefits, and forcing an AFT into them can slow progress rather than accelerate it.  </p>
<p><strong>1. Massive, High-Debt Codebases</strong><br>When the codebase is enormous or burdened with significant technical debt, the AI struggles to navigate. The complexity of patterns, inconsistencies, and legacy decisions makes it hard for both humans and AI models to understand what goes where. In such cases, AI-generated code often introduces more risk than value, compounding the very problems teams are trying to solve.  </p>
<p><strong>2. A Large Number of Microservices and Interconnected Tools</strong><br>When the codebase is composed of many microservices and interconnected tools, the AI struggles to understand the context and the dependencies between them. This makes it hard for the AI to generate useful contributions, and hard to make changes in all the necessary places.</p>
<p><strong>3. Heavy Dependencies Between Codebases</strong><br>When multiple codebases depend heavily on one another, the AI struggles to understand the context that spans them. This, again, makes it hard for the AI to generate useful contributions.</p>
<h3>Final thoughts</h3>
<p>Moving forward, I intend to use this approach in every project where the goal is to experiment and find the right solution. It is by no means a silver bullet, though; in my opinion, AI should be treated as yet another tool, not a cure-all.</p>
 ]]>
      </content>
    </entry>
  
    <entry>
      <title>Choosing the Right Software in an Era of Vaporware</title>
      <id>https://www.kaiomagalhaes.com/blog/choosing-the-right-software-in-an-era-of-vaporware</id>
      <published>2025-05-16T00:00:00.000Z</published>
      <updated>2025-05-16T00:00:00.000Z</updated>
      <author>
        <name>kaiomagalhaes</name>
        <uri>https://www.kaiomagalhaes.com</uri>
      </author>
      <content type="html" xml:base="https://www.kaiomagalhaes.com" xml:lang="en">
      <![CDATA[ <p>Working in the custom-software industry often means comparing what we can build for our customers with what is already being sold as SaaS. That involves combing through documentation, marketing sites, and countless “we-can-do-everything” sales pitches, many of which turn out to be empty promises. A common culprit is the disconnect between product and sales, where <strong>everything</strong> is allegedly possible, but there is always an asterisk lurking at the end.</p>
<hr>
<h2>The StorageInc * - A Case Study</h2>
<p>A few years ago, a company (let’s call them <strong>Acme</strong>) signed a contract with a file-storage service we’ll call <strong>StorageInc</strong>. Acme needed to store <em>terabytes</em> of files and make them searchable so users could review documents as learning materials. StorageInc’s sales team promised it would be <em>simple</em>: store your files with us, send an API request with a search term, and you’re done, far easier than any alternative on the market.</p>
<p>This functionality was crucial for Acme. Because of the large file sizes, stringent security requirements, and a separate set of features StorageInc offered, Acme agreed to a multimillion-dollar contract. Only later, during implementation discussions, did Acme learn what the asterisk actually meant: <strong>the search indexed only the first <em>x</em> pages of each file</strong>. With documents hundreds of pages long, that limitation was a deal-breaker.</p>
<p>Acme swore this detail had never been disclosed by the sales team. Whether sales knew about it remains unclear; it emerged only when Acme’s development team spoke directly with StorageInc’s implementation engineers. There were other gaps between what was promised and what was delivered, but this single limitation created enough friction to render the partnership useless.</p>
<p>The result? A legal battle to break the contract, because continuing with StorageInc suddenly looked costlier than building the service in-house.</p>
<hr>
<h2>What This Teaches Us</h2>
<p>Situations like this arise when you choose an off-the-shelf solution instead of building your own. Often, SaaS products are “good enough” and truly solve your problem, but differentiating the real solutions from the vaporware takes time and diligence.</p>
<p>I’ve seen similar issues many times since that first incident, and, frankly, I expect to see them more frequently. As more companies build features with AI agents like <a href="https://lovable.ai">Lovable</a>, <a href="https://platform.openai.com/docs/guides/codex">OpenAI Codex</a> and <a href="https://cursor.sh">Cursor</a>, I hear sales teams claiming <em>anything</em> is possible. I see half-finished functionalities because developers don’t fully understand what they’re building. Security failures are on the rise.</p>
<p>While vaporware has always existed, today it feels like <strong>most</strong> of what’s out there is vaporware. Our filters must get stricter.</p>
<hr>
<h2>AI Should Augment, Not Replace, Developers</h2>
<p>The productivity gains we get from AI should come from <strong>augmenting</strong> developers, not directly replacing them. These tools can write short snippets of code impressively well, but they struggle with complex architectures. The real time sinks were never the coding lines themselves; they were the planning, architecting, and testing phases.</p>
<p>I can now build an impressive proof of concept (POC) in a couple of days, work that once took weeks. Yet it is still a <em>POC</em>. It still needs thorough review and safeguards. User flows must be well-defined, and the UI must provide clear cues for a good experience.</p>
<p>In short, it has never been easier to sell features through a polished UI that don’t really exist.</p>
<hr>
<h2>Due Diligence Checklist</h2>
<p>Before you add your credit card details, be sure to:</p>
<ol>
<li><strong>Understand the product’s core proposition.</strong>  </li>
<li><strong>Review the feature set carefully.</strong>  </li>
<li><strong>Talk to a salesperson and ask clear yes/no questions.</strong> Avoid “it depends.”  </li>
<li><strong>Request a trial</strong> whenever possible.  </li>
<li><strong>Never commit</strong> to more than a few hundred dollars without proper due diligence.</li>
</ol>
<hr>
<p>Choosing the right software in an era of vaporware is less about dazzling demos and more about ruthless verification. Keep your guard up, dig for the *, and don’t let the promise of “AI magic” cloud your judgment.</p>
 ]]>
      </content>
    </entry>
  
    <entry>
      <title>Trying out AI-First IDEs: Cursor, Windsurf, Zed, Aide, Copilot, and Cody</title>
      <id>https://www.kaiomagalhaes.com/blog/personal-experience-with-ai-first-ides</id>
      <published>2025-03-25T00:00:00.000Z</published>
      <updated>2025-03-25T00:00:00.000Z</updated>
      <author>
        <name>kaiomagalhaes</name>
        <uri>https://www.kaiomagalhaes.com</uri>
      </author>
      <content type="html" xml:base="https://www.kaiomagalhaes.com" xml:lang="en">
      <![CDATA[

<p>Over the last couple of weeks I&#39;ve been tasked with reviewing the AI-first IDEs available in the first quarter of 2025.
The goal is to decide which ones I&#39;ll keep using on a daily basis as I work across different projects and technologies.
The IDEs I&#39;ll be focusing on are: <strong>Cursor</strong>, <strong>Windsurf</strong>, <strong>Zed</strong>, <strong>Aide</strong>, <strong>GitHub Copilot</strong>, and <strong>Sourcegraph Cody</strong>.</p>
<p>Before I jump into the evaluation itself, I would like to mention how Microsoft changed the entire game by offering an open-source IDE, VS Code. Most of the tools below are based on it and, IDE-wise, provide a similar experience.</p>
<h2>Evaluation Criteria</h2>
<p>I&#39;m evaluating each tool based on the following criteria, with each scored from 1 to 10:</p>
<ul>
<li><strong>Pricing</strong>: Affordability and available plans</li>
<li><strong>Onboarding</strong>: Ease of getting started</li>
<li><strong>UI/UX</strong>: Intuitiveness and user experience</li>
<li><strong>Transparency</strong>: Clarity on changes and suggestions</li>
<li><strong>Efficiency</strong>: Speed and productivity impact</li>
<li><strong>Code Quality</strong>: Accuracy and best practices</li>
<li><strong>Context Management</strong>: Handling large codebases and context</li>
<li><strong>Privacy</strong>: Data safety and user control</li>
<li><strong>Features</strong>: Breadth and uniqueness of functionalities</li>
</ul>
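<p>To make the scoring concrete, here is a minimal sketch of how per-criterion scores like these could be rolled into an overall ranking. Note that the scores in the snippet are hypothetical placeholders, not the actual values from the charts below, and the unweighted mean is just one of several reasonable aggregation choices.</p>

```python
# Sketch: aggregating 1-10 criteria scores into a per-IDE ranking.
# All scores here are HYPOTHETICAL placeholders for illustration only.

CRITERIA = [
    "pricing", "onboarding", "ui_ux", "transparency", "efficiency",
    "code_quality", "context_management", "privacy", "features",
]

# Example scores (1-10) for two IDEs, keyed by criterion.
scores = {
    "Cursor":   dict(zip(CRITERIA, [7, 9, 9, 8, 9, 8, 6, 7, 9])),
    "Windsurf": dict(zip(CRITERIA, [8, 8, 8, 6, 9, 7, 6, 7, 8])),
}

def overall(ide_scores: dict) -> float:
    """Unweighted mean across all criteria, rounded to one decimal."""
    return round(sum(ide_scores.values()) / len(ide_scores), 1)

# Sort IDEs by their overall score, best first.
ranking = sorted(scores, key=lambda ide: overall(scores[ide]), reverse=True)
for ide in ranking:
    print(f"{ide}: {overall(scores[ide])}")
```

<p>A weighted mean (e.g., weighting <em>Code Quality</em> and <em>Privacy</em> more heavily) would be a natural refinement if some criteria matter more to your team than others.</p>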
<hr>
<h2>Cursor: AI Pair Programming On Steroids</h2>
<p><strong>Cursor</strong> feels like an upgraded VS Code—familiar yet powerful. Features like <strong>Agent Mode</strong>, <strong>Composer</strong>, and the experimental <strong>Bug Finder</strong> allow complex multi-file refactoring with natural language descriptions. Context-aware AI suggestions are transparent and easy to review.</p>
<h3>AI Models Available</h3>
<ul>
<li><strong>OpenAI&#39;s GPT-4</strong></li>
<li><strong>Anthropic&#39;s Claude 3.5/3.7</strong></li>
<li><strong>Google&#39;s Gemini 2.0</strong></li>
<li><strong>xAI&#39;s Grok</strong></li>
<li><strong>Cursor&#39;s own cursor-small model</strong> for basic code completion</li>
</ul>
<h3>Problems Encountered</h3>
<ul>
<li>Context limits became noticeable in large projects, requiring more review cycles.</li>
<li>The <strong>Bug Finder</strong> tool, although promising, sometimes provides false positives or irrelevant suggestions when dealing with highly modular codebases.</li>
<li>Adding new files to the context can be cumbersome, and files tend to disappear on every request, forcing users to re-add them manually.</li>
</ul>
<h3>Pricing</h3>
<ul>
<li><strong>Free tier</strong> available  </li>
<li><strong>Pro Plan:</strong> $20/month (advanced LLM integrations like GPT-4, Claude 3.5/3.7)  </li>
<li><strong>Enterprise:</strong> Custom pricing available</li>
</ul>
<img src="/assets/blog/personal-ides/cursor_scores_horizontal_modern.png" alt="Cursor IDE Scores Visualization">

<hr>
<h2>Windsurf: Codeium&#39;s Autonomous Coder</h2>
<p><strong>Windsurf</strong> offers more autonomy with its agent-driven <strong>Cascade AI</strong>. It proactively suggests and even applies multi-step changes across multiple files. It&#39;s highly effective at automating tedious tasks, but the degree of autonomy can sometimes feel too bold—less aligned with a cautious, review-first mindset.</p>
<h3>AI Models Available</h3>
<ul>
<li><strong>Codeium&#39;s Cascade Base model</strong></li>
<li><strong>OpenAI&#39;s GPT-4</strong></li>
<li><strong>Anthropic&#39;s Claude 3.5/3.7</strong></li>
<li><strong>Google&#39;s Gemini 2.0</strong></li>
</ul>
<h3>Problems Encountered</h3>
<ul>
<li>Over-aggressive suggestions required frequent intervention, particularly when refactoring complex logic.</li>
<li><strong>Cascade AI</strong> can sometimes break existing functionality when making changes, making it risky for large and interconnected codebases.</li>
</ul>
<h3>Pricing</h3>
<ul>
<li><strong>Free tier:</strong> Unlimited basic AI features  </li>
<li><strong>Pro subscription:</strong> $15/month (premium LLM access like Claude 3.5)  </li>
<li><strong>Enterprise plans</strong> available</li>
</ul>
<BlogImage src="/assets/blog/personal-ides/windsurf_scores_horizontal_modern.png" alt="Windsurf IDE Scores Visualization" />

<hr>
<h2>Zed: Collaboration Meets AI Speed</h2>
<p><strong>Zed</strong> emphasizes real-time collaborative coding combined with powerful inline AI editing. It&#39;s extremely fast and provides transparent control over AI interactions, ideal for teams that frequently collaborate on the same files simultaneously. Its UI is polished, though slightly different from VS Code.</p>
<h3>AI Models Available</h3>
<ul>
<li><strong>Anthropic&#39;s Claude 3.5 Sonnet</strong> (default)</li>
<li><strong>OpenAI&#39;s GPT-4</strong></li>
<li><strong>Google&#39;s PaLM/Gemini</strong></li>
<li><strong>Local models via Ollama and LM Studio</strong></li>
</ul>
<h3>Problems Encountered</h3>
<ul>
<li>Slight learning curve due to unfamiliar UI.</li>
<li>While collaboration works seamlessly in small teams, scaling up to larger groups occasionally results in synchronization delays or conflicts, especially in fast-paced editing scenarios.</li>
</ul>
<h3>Pricing</h3>
<ul>
<li><strong>Free:</strong> Collaborative editing and basic AI  </li>
<li><strong>Zed Pro:</strong> $12/month per user (includes Anthropic&#39;s Claude 3.5)  </li>
<li><strong>Team and Enterprise plans</strong> available</li>
</ul>
<BlogImage src="/assets/blog/personal-ides/zed_scores_horizontal_modern.png" alt="Zed IDE Scores Visualization" />

<hr>
<h2>Aide: Open-Source AI for Privacy-Minded Developers</h2>
<p><strong>Aide</strong> is fully open-source, allowing integration with various local or remote LLMs. It proactively aids debugging, provides inline transformations, and has a thoughtful rollback mechanism to quickly revert unwanted AI edits. It&#39;s great for developers prioritizing privacy and extensive customization but slightly less polished compared to commercial alternatives.</p>
<h3>AI Models Available</h3>
<ul>
<li><strong>OpenAI&#39;s GPT-4</strong></li>
<li><strong>Anthropic&#39;s Claude</strong></li>
<li><strong>Local models via Ollama and LM Studio</strong></li>
</ul>
<h3>Problems Encountered</h3>
<ul>
<li>UX is less intuitive, and initial configuration can be complex.</li>
<li>Limited support compared to commercial alternatives.</li>
</ul>
<h3>Pricing</h3>
<ul>
<li><strong>Completely free</strong> (open-source)  </li>
<li>Cost depends on chosen external LLM APIs or local models</li>
</ul>
<BlogImage src="/assets/blog/personal-ides/aide_scores_horizontal_modern.png" alt="Aide IDE Scores Visualization" />

<hr>
<h2>Copilot: Effortless and Popular AI Coding</h2>
<p><strong>GitHub Copilot</strong> feels intuitive right from the start. The inline suggestions powered by GPT-4 integrate seamlessly into VS Code and make general coding tasks noticeably faster. It&#39;s excellent for quickly generating boilerplate code or exploring unfamiliar APIs.</p>
<h3>AI Models Available</h3>
<ul>
<li><strong>OpenAI&#39;s GPT-4</strong> (for chat)</li>
<li><strong>GitHub Copilot</strong> (custom model for inline completions)</li>
</ul>
<h3>Problems Encountered</h3>
<ul>
<li>Limited context awareness in large projects.</li>
<li>It occasionally makes inaccurate assumptions about variable naming and scope, leading to suggestions that may not align with the current code style or logic.</li>
<li>Less effective for multi-file operations compared to dedicated AI-first IDEs.</li>
</ul>
<h3>Pricing</h3>
<ul>
<li><strong>Free:</strong> Limited to students and open-source maintainers</li>
<li><strong>Individual:</strong> $10/month</li>
<li><strong>Business:</strong> $19/user/month (includes advanced policy controls)</li>
</ul>
<BlogImage src="/assets/blog/personal-ides/copilot_scores_horizontal_modern.png" alt="GitHub Copilot Scores Visualization" />

<hr>
<h2>Cody: Ideal for Navigating Large Codebases</h2>
<p><strong>Sourcegraph Cody</strong> is designed to help developers navigate large codebases and understand complex code. It provides context-aware suggestions and can be used in various IDEs.</p>
<h3>AI Models Available</h3>
<ul>
<li><strong>Anthropic&#39;s Claude 3.5 Sonnet</strong> (default for chat)</li>
<li><strong>OpenAI&#39;s GPT-4o</strong></li>
<li><strong>Google&#39;s Gemini 1.5</strong></li>
<li><strong>Mixtral</strong></li>
</ul>
<h3>Problems Encountered</h3>
<ul>
<li>Its deepest context features work best when connected to a Sourcegraph instance; standalone use is more limited.</li>
<li>Inline completions feel less polished than Copilot&#39;s.</li>
<li>Less effective for multi-file edits compared to dedicated AI-first IDEs.</li>
</ul>
<h3>Pricing</h3>
<ul>
<li><strong>Free:</strong> Generous limits for individual developers</li>
<li><strong>Pro:</strong> $9/month</li>
<li><strong>Enterprise:</strong> $19/user/month</li>
</ul>
<BlogImage src="/assets/blog/personal-ides/cody_scores_horizontal_modern.png" alt="Sourcegraph Cody Scores Visualization" />

<hr>
<h2>Final Thoughts: Picking the Right AI-First IDE</h2>
<p>Here&#39;s a quick recommendation based on workflow priorities:</p>
<ul>
<li><strong>Cursor:</strong> Best for VS Code familiarity and strong control over AI changes.  </li>
<li><strong>Windsurf:</strong> Ideal for autonomous code management with oversight.  </li>
<li><strong>Zed:</strong> Great for real-time collaborative development.  </li>
<li><strong>Aide:</strong> Suitable for privacy-conscious developers who want open-source flexibility.  </li>
<li><strong>Copilot:</strong> Excellent for general-purpose coding and boilerplate generation.  </li>
<li><strong>Cody:</strong> Ideal for large codebases requiring deep contextual understanding.</li>
</ul>
<h2>Overall Comparison</h2>
<p>After thorough testing and scoring, <strong>Cursor</strong> remains my top choice. Its robust transparency, feature set, and excellent UI align perfectly with my &quot;trust but verify&quot; approach.</p>
<p>Choosing the right tool ultimately depends on your specific use case and preferences. I hope this breakdown helps you make an informed decision.</p>
 ]]>
      </content>
    </entry>
  
    <entry>
      <title>Defining our priorities as the CTO of a tech company</title>
      <id>https://www.kaiomagalhaes.com/blog/Defining-priorities-as-a-cto-of-a-tech-company</id>
      <published>2025-02-26T00:00:00.000Z</published>
      <updated>2025-02-26T00:00:00.000Z</updated>
      <author>
        <name>kaiomagalhaes</name>
        <uri>https://www.kaiomagalhaes.com</uri>
      </author>
      <content type="html" xml:base="https://www.kaiomagalhaes.com" xml:lang="en">
      <![CDATA[ <p>Over the last few weeks, I&#39;ve been feeling increasingly overwhelmed by the number of tasks I&#39;m responsible for. As the CTO of a services company, I&#39;m involved in nearly 80% of all discussions that are not project-specific (and often in those as well). This means that deciding what to focus on at the start of my day is often a struggle. </p>
<p>This morning, before doing anything else, I decided to define my concept of priority. To my surprise, I was able to come up with a definition that encompasses everything I find valuable for both the company and the team.</p>
<p>Before diving into that, I need to define the <strong>North Star</strong>. Companies differ in processes, people, and culture, but one characteristic most of them share is their main goal: <em>to make money</em>. However, there are values they might be willing to sacrifice that I am not. The quality of life—both for me and my team—is a good example. </p>
<p>Therefore, the <strong>North Star</strong> for my definition of priority is: </p>
<blockquote>
<p>A task that increases either company revenue or the quality of life for my team, where one must not contradict the other.</p>
</blockquote>
<p>To help me decide what I should work on each day, I came up with two tools. </p>
<h3>1. A High-Level Definition</h3>
<p>This helps me quickly regain perspective when I find myself with tunnel vision, focusing on a task purely for personal satisfaction. I ask myself if the task falls into one of these categories:</p>
<ul>
<li><strong>Stability</strong>  <ul>
<li>The task maintains or improves the company’s current revenue.</li>
</ul>
</li>
<li><strong>Quality of life</strong>  <ul>
<li>The task enhances the team&#39;s ability or happiness.</li>
</ul>
</li>
<li><strong>Growth</strong>  <ul>
<li>The task supports revenue growth.</li>
</ul>
</li>
</ul>
<p>If the answer is an easy &quot;no&quot; for all of the above, it means the task has no place in my workday. It likely doesn’t fall within my responsibilities, so I either delegate it or drop it entirely.  </p>
<p>If the answer is &quot;yes,&quot; I move forward with it. However, <em>every single day</em>, I have more tasks than I can accomplish. To prioritize effectively, I created the following <strong>scale of priorities</strong>:  </p>
<h3>2. The Scale of Priorities</h3>
<ol>
<li><p><strong>Will it increase the company’s revenue?</strong>  </p>
<ol>
<li>It will bring in new contracts.  </li>
<li>It will increase the size of existing contracts.  </li>
<li>It will strengthen our ability to execute current contracts.</li>
</ol>
</li>
<li><p><strong>Will it enhance the team’s ability to deliver quality work?</strong>  </p>
<ol>
<li>It will improve the team’s technical skills.  </li>
<li>It will boost team morale and work satisfaction.</li>
</ol>
</li>
<li><p><strong>Will it increase our chances of future success?</strong>  </p>
<ol>
<li>It will improve the likelihood of securing larger or more contracts in the future.  </li>
<li>It will strengthen our ability to handle contract expansion.</li>
</ol>
</li>
</ol>
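<p>As a toy illustration, the scale above can be sketched as a small ranking function. This is only a sketch: the boolean flags (<code>increasesRevenue</code> and so on) are my own shorthand for the three questions, not anything formal.</p>

```javascript
// Hedged sketch of the scale of priorities as a comparator.
// The flags are illustrative shorthand for the three questions above.
function priorityRank(task) {
  if (task.increasesRevenue) return 1;      // new, bigger, or stronger contracts
  if (task.improvesTeamDelivery) return 2;  // skills, morale, satisfaction
  if (task.improvesFutureSuccess) return 3; // future contracts and expansion
  return Infinity;                          // fails the North Star: delegate or drop
}

const tasks = [
  { name: "Write internal wiki", improvesTeamDelivery: true },
  { name: "Prepare client proposal", increasesRevenue: true },
  { name: "Reorganize my bookmarks" },
];

// Sort tasks by rank: revenue first, team second, future third, rest dropped.
const ordered = [...tasks].sort((a, b) => priorityRank(a) - priorityRank(b));
console.log(ordered.map((t) => t.name)); // proposal first, bookmarks last
```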
<p>By following this approach, I can easily categorize each task and act accordingly!</p>
 ]]>
      </content>
    </entry>
  
    <entry>
      <title>Value engineering as a software methodology</title>
      <id>https://www.kaiomagalhaes.com/blog/value-engineering-as-a-software-methodology</id>
      <published>2024-07-25T00:00:00.000Z</published>
      <updated>2024-07-25T00:00:00.000Z</updated>
      <author>
        <name>kaiomagalhaes</name>
        <uri>https://www.kaiomagalhaes.com</uri>
      </author>
      <content type="html" xml:base="https://www.kaiomagalhaes.com" xml:lang="en">
<![CDATA[ <p>Over my career, I&#39;ve seen many attempts to answer the question, &quot;What is the best way to build software?&quot; After more than a decade in the industry, I concluded that any viable answer would need to consider at least the following factors:</p>
<ul>
<li>Financial resources</li>
<li>Desired timeline</li>
<li>Roadmap maturity, i.e., the amount of product information ahead of the design and development phases</li>
</ul>
<p>Given the numerous variables involved, I decided to focus on a more practical and applicable question:</p>
<blockquote>
<p>&quot;What is one engineering method that will give me the best ROI in most situations?&quot;</p>
</blockquote>
<p>Whenever I found myself talking to a CTO friend of mine, the same sentence kept coming up:</p>
<blockquote>
<p>&quot;As engineers, we need to provide value&quot;</p>
</blockquote>
<p>We would say it as a critique of the practices that keep showing up in software engineering circles. Every day, we release new tools for particular problems. Yet, we see the community picking them up to solve challenges they don&#39;t have. In parallel to these conversations, I started working as the DevOps engineer of a 12-year-old product. While going through the onboarding I discovered that the total downtime in the entire product&#39;s history was only 3 hours. The strategy for that? A single Linux server with Apache running a PHP Application. Oh, and let&#39;s not forget a cron job to clean up the logs. That&#39;s it—simple and effective.</p>
<p>I am not here to advocate doing the bare minimum on a product, but to raise a question about what we really need in our applications. Looking at these two situations together, I started noticing the number of decisions we make daily to avoid future problems we won&#39;t have. Too often we over-engineer our solutions because we once needed &quot;it&quot; on some past project.</p>
<p>Let&#39;s talk about value engineering as a guiding method for software building.</p>
<p>When I began writing this piece, I researched &quot;Value Engineering&quot;. I discovered that the term was first coined in 1947 by <a href="https://en.wikipedia.org/wiki/Value_engineering">Lawrence D. Miles at General Electric</a>. Miles aimed to find cost-effective ways to achieve functions without compromising quality. We define Value in our context as a function of cost. Thus, we can increase value either by adding functions or decreasing costs. </p>
<p>Before we get too deep into a technical conversation, let&#39;s talk about the meaning of <em>value</em>. At its core, value means money. This money can come in the form of an increase in revenue or a decrease in costs. In SaaS products, we also add value through functionalities by increasing the product&#39;s stickiness: the longer a user sticks with a SaaS, the more revenue they generate. In summary, creating value for a customer through software means providing benefits that exceed the costs, whether via direct or indirect financial gains.</p>
<h3>Principles of Value Engineering</h3>
<h4>a) Every logic implementation needs to add value to the user</h4>
<p>A 2003 <a href="https://www.researchgate.net/publication/220309761_Reading_Writing_and_Code">study</a> by Diomidis Spinellis showed that developers spend most of their time reading code. That means that the less code we have, the better. Another way of seeing it is that every line of code needs to add value to the software we are building. Whenever an engineer reads through a function that doesn&#39;t add value, we are losing time. I&#39;ll be the first to say that I&#39;ve been a big offender of this rule. Over the years I&#39;ve seen pieces of logic added &quot;in case we need it in the future&quot;. The reality is that more often than not, it never became necessary. Unfortunately, it would still take time for another developer to understand it. Thus, we need to make sure we always tie implementations to real-world requirements.</p>
<h4>b) Do not use integrations or extra tools unless you need them</h4>
<p>This is one of my biggest concerns around the amount of tooling available. Because we have many options, it is easy to go for what is most talked about in our networks. One example is engineers proposing <a href="https://graphql.org/">GraphQL</a> because <a href="https://github.com/">Github</a> uses it. However, because one technology works for a big company, it doesn&#39;t mean it works for our problem at hand. </p>
<p>One problem with adding integrations and tooling is maintenance. For every piece you add, the higher the chance that someone on your team lacks the experience needed to maintain it.</p>
<p>A rule of thumb is: do not add a new technology or third-party integration unless you have no other choice.</p>
<p>Examples I&#39;ve seen in the wild:</p>
<ol>
<li>Using <a href="https://microservices.io/">microservices</a> for small applications that don&#39;t need to scale</li>
<li>Using <a href="https://www.freecodecamp.org/news/python-lambda-function-explained">Lambda functions</a> unnecessarily</li>
<li>Integrations with analytics that nobody cares about</li>
<li>Usage of <a href="https://graphql.org/">GraphQL</a> when <a href="https://www.codecademy.com/article/what-is-rest">REST</a> would be enough.</li>
</ol>
<h4>c) Do not optimize for scale unless explicitly requested.</h4>
<p>A rookie mistake when building a product is trying to optimize early. Another is worrying about scalability when there are no plans for scaling. A few days ago, while managing a project, I faced this problem. One engineer opposed a solution based on &quot;what happens when we have thousands of users?&quot;. While this might be a real issue in other projects, in this one it wasn&#39;t: our user base was set to stay under 300 users. Our budget was small, and we had to focus on the problem at hand instead of worrying about problems we did not have.</p>
<p>With that in mind, there are a few guidelines I propose for this:</p>
<ol>
<li>Only use technologies your team is comfortable with unless requested. Most technologies can achieve the same goal, but knowledge of each is necessary. I found out that more often than not, the wrong tool with the right knowledge is better than the opposite. Anything we are not familiar with increases the chances of creating bugs. </li>
<li>Do not keep processes that slow you down unless they add real value to the user.</li>
<li>Do not copy functionalities from other projects unless they have the same problem.</li>
</ol>
<h4>d) Be realistic about your scope and plan for it</h4>
<p>When evaluating architecture or functionality, it&#39;s crucial to be realistic about the problem we&#39;re solving. We often confuse short-term solutions with long-term ones. When aiming to provide value now, we must consider future impacts. The goal of Value Engineering is not to focus on today and neglect tomorrow. Instead, we should build what we need for the appropriate timeframe.</p>
<p>It is worth mentioning that not every function of the product is visible to the user. Security, for instance, can be neglected and go unnoticed. Then, one day, we face a data breach and lose the trust of every single user. We should consider non-functional requirements as part of the scope and plan for them. With that said, different non-functional requirements become necessary at different stages of the project. We should avoid trying to predict every single one and implementing them all on day one.</p>
<h4>e) Communicate the tradeoffs to your peers and customers</h4>
<p>Whenever I&#39;m working with other people I have 3 main rules:</p>
<blockquote>
<p>Communicate, communicate, communicate</p>
</blockquote>
<p>When working on a team, it is hard to have a full view of the impact our decisions will have. Thus it is necessary to communicate the tradeoffs of our solutions to those they will impact. For example, if we decide to build our own image-processing engine, it will mean a lower cost in services, but a higher cost in implementation and maintenance. At the same time, it will give us greater control over, and security of, our data. To know if it is worth it, you need to evaluate the budget for the functionality, the timeline, security, and maintenance concerns. Unless you&#39;re the solo engineer on the project, it will be hard to have all the answers you need.</p>
<p>You should always make sure to align the decisions with the parties involved. This will also help with decision-making when you need to go back and change something.</p>
<h3>Concepts and Thoughts on Value Engineering</h3>
<p>Now that we have concrete examples, let&#39;s try to answer the following questions:</p>
<ol>
<li>What is &quot;Value (software) engineering&quot; and what does it mean?</li>
<li>How does it affect me as a CTO?</li>
<li>How does it affect me as an engineer?</li>
<li>How does it affect timelines?</li>
<li>How does it affect my customers?</li>
</ol>
<h4>1. What is &quot;Value (software) engineering&quot; and what does it mean?</h4>
<p>It is a software engineering discipline that focuses on generating value through software. It aims to direct product development toward an output that leaves the user better off. Every decision needs to be made through the lens of &quot;How does this add value to the final user in the current phase of the application?&quot;. This means that we write every piece of code with the final user and the available resources in mind (time is a resource). While it might sound simple, it is easy to fall into the many traps laid down by our daily distractions and personal decisions.</p>
<h4>2. How does it affect me as a CTO?</h4>
<p>As a CTO, if I focus on only what provides value to my customers, I&#39;m able to maximize their chances of succeeding. Having a successful customer means:</p>
<ol>
<li>A higher likelihood of continuous business, and thus continuous revenue. </li>
<li>A higher chance of referrals.</li>
<li>A bigger impact on my team&#39;s and users&#39; happiness</li>
</ol>
<h4>3. How does it affect me as an engineer?</h4>
<p>As an engineer, it affects the amount of value I provide while writing software artifacts. While this might not be attractive to everyone, it is to me. There are few things I like better than seeing a happy user. Focusing only on what provides value also narrows the amount of technology I need to use. One practice I follow while building products to maximize their value is to avoid testing new technologies in real-world products. I build small POCs and only adopt new tooling after I&#39;ve been able to review it. While some new projects require new technology, having this mindset makes me trust marketing campaigns less and focus more on proving a tool&#39;s value before I spend the time implementing it.</p>
<h4>4. How does it affect timelines?</h4>
<p>This is almost a no-brainer. By striving for simpler solutions, we can (more often than not) deliver them faster. But we need to pay attention to how we do that. Every shortcut we take might mean a longer walk down the road, so we need to make sure the customer is ready for that. In my experience, most of the time we never need to take that longer path. With that said, I&#39;ve seen many products fail because such shortcuts weren&#39;t taken and the product took too long to get done.</p>
<p>We need to look for a balance here. Because every product is different, an honest conversation with the stakeholders is necessary. The engineering team needs to understand the plans so they can plan the code accordingly.</p>
<h4>5. How does it affect my customers?</h4>
<p>By understanding the product roadmap, but focusing on the problems we have in front of us, we can deliver the best value for our customers. They get what they&#39;ve paid for now, and we deal with future problems they face instead of anxiously writing unnecessary pieces of code.</p>
<p>I&#39;ve seen customers unhappy with how long something takes to get done way too many times. While sometimes they had no clue how long it takes to build software, many times I&#39;ve seen them be right about it. As engineers and designers, we often overcomplicate solutions because we want what is best for our users, and at the same time we forget that users should have something that works rather than nothing at all.</p>
<p>Even after building applications for over ten years and repeating this philosophy to my team every single day, I still have to keep reminding everyone that no client has an infinite budget. The real outcome of our work is not amazing design pieces or code or architectures, but usable functionality in the hands of our users.</p>
<h3>Conclusion</h3>
<p>In summary, Value Engineering is about maximizing the impact of software in an environment where resources are continually constrained. It is more than just a method; it is an art that requires careful consideration of the unique challenges and limitations of each project.</p>
<p>By adhering to the principles discussed here, we cannot guarantee the perfect outcome, but we can certainly achieve far better results than if we ignored them.</p>
 ]]>
      </content>
    </entry>
  
    <entry>
      <title>Building a minimal OPC UA integration to collect office data with Arduino and Raspberry PI 3</title>
      <id>https://www.kaiomagalhaes.com/blog/building-a-minimal-opc-ua-integration-with-arduino</id>
      <published>2024-04-05T00:00:00.000Z</published>
      <updated>2024-04-05T00:00:00.000Z</updated>
      <author>
        <name>kaiomagalhaes</name>
        <uri>https://www.kaiomagalhaes.com</uri>
      </author>
      <content type="html" xml:base="https://www.kaiomagalhaes.com" xml:lang="en">
      <![CDATA[ <h1>Motivation</h1>
<p>In the manufacturing sector, the usage of OPC UA is very common. Its data extraction and processing capabilities make it one of the best protocols for understanding what is happening with the machinery on the factory floor. A few days ago, I found myself talking to someone responsible for innovation in the manufacturing sector, and he mentioned in passing a simple but great example of OPC UA usage. Imagine that you are responsible for preparing plastic for a certain shape you need. While melting it and adding chemical compounds, you need to guarantee a certain temperature and viscosity. If anything is different than expected, the plastic might end up too soft, too stiff, or too brittle. With proper data visibility, those responsible for the factory floor can be made aware when any metric is off, and we can show them how to correct it. For example, if the required temperature is 67 Celsius, we can notify them if it is above or below, and tell them what the correct one should be.</p>
<p>I loved this problem because it is simple and direct, and I started thinking about how I could mimic it with some gadgets I have at home. Looking at my toolset, I found some cheap sensors, though probably more expensive than most industrial ones.</p>
<h1>The Problem</h1>
<p>Because I don&#39;t have anything close to melting plastic in my office, I decided to tone it down. A close comparison to a thermometer is a light sensor. Both provide numbers that are readings from the real world. I decided to create a device that would:</p>
<ol>
<li>Read the light level of my office using a light sensor and an Arduino</li>
<li>Send the data to an OPC UA client working inside a Raspberry PI</li>
<li>Have the client send the information to the OPC UA Server</li>
<li>Upload the data to a MongoDB database hosted in MongoAtlas</li>
<li>Show this data in a dashboard</li>
</ol>
<h1>The Solution</h1>
<h2>Let&#39;s Start with the Definitions</h2>
<h3>What is OPC UA?</h3>
<p>OPC UA stands for Open Platform Communications Unified Architecture. It is a machine-to-machine communication protocol for industrial automation developed by the OPC Foundation. Its purpose is to provide a standard way of accessing and collecting relevant information from hardware devices.</p>
<h4>Key Aspects of OPC UA</h4>
<ul>
<li>Platform Agnostic: It is designed to be platform-independent, meaning it can be used in different systems and devices.</li>
<li>Security: It offers built-in security functionalities including certificates for authentication, and encryption for data privacy.</li>
<li>Scalability: Suitable for both small and large applications.</li>
<li>Sophisticated Data Modeling: OPC UA allows the creation of sophisticated data models with its information modeling capabilities. It can represent both the data and the relationships/semantics of the data.</li>
</ul>
<h3>Components of OPC UA</h3>
<h4>Server</h4>
<ul>
<li>Responsibilities: The server&#39;s primary responsibilities include connecting to devices and systems using protocols such as Modbus, TCP/IP, or proprietary protocols offered by the device manufacturers.</li>
<li>Data Collection: It can collect data by using polling mechanisms, where the server requests data from the devices every X seconds, or through subscriptions where the devices notify the server of new data.</li>
<li>Data Processing: Once the data is collected, the server provides mechanisms for processing and storing this data, making it accessible in a structured way.</li>
</ul>
<h4>Nodes</h4>
<ul>
<li>They are the smallest units of data and can represent a device, a data type, or an object with multiple values. For instance, a thermometer device could contain several nodes that would represent: Temperature measurement, measurement unit, status information, methods that could be called upon this device, and others.</li>
</ul>
<h4>Address Space</h4>
<ul>
<li>It represents a set of nodes in an organized manner. For instance, it could represent a set of thermometers of a climate control system in a building.</li>
</ul>
<h4>Services</h4>
<ul>
<li>Services are mechanisms through which clients can perform operations on a server&#39;s address space, such as reading and writing data, monitoring variables for changes, managing subscriptions for event notifications. Examples include discovery services, session services, node management services, data access services, and others.</li>
</ul>
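<p>As a rough sketch of what a Read service exchange looks like, here is the request/response shape as plain data. The field names loosely follow the OPC UA spec but are illustrative, and this is not the real node-opcua API:</p>

```javascript
// Sketch of an OPC UA Read service: a client asks for attribute values of
// specific nodes, and the server resolves them against its address space.
const readRequest = {
  nodesToRead: [
    { nodeId: "ns=1;s=Office.LightSensor.Lux", attributeId: "Value" },
  ],
};

// A toy address space mapping node identifiers to current values.
const addressSpace = { "ns=1;s=Office.LightSensor.Lux": 412.5 };

// The server answers with one result (value + status code) per node requested.
function handleRead(request, space) {
  return request.nodesToRead.map(({ nodeId }) => ({
    nodeId,
    value: space[nodeId],
    statusCode: nodeId in space ? "Good" : "BadNodeIdUnknown",
  }));
}

const response = handleRead(readRequest, addressSpace);
console.log(response[0].value, response[0].statusCode); // 412.5 Good
```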
<h2>Gadgets</h2>
<ol>
<li>Arduino Uno or similar</li>
<li>Raspberry PI</li>
<li>Light sensor</li>
<li>Connection cables</li>
</ol>
<h3>Configuration</h3>
<p>Before getting to the configuration, you should know that you can find all the code in the following repositories:</p>
<ol>
<li><a href="https://github.com/kaiomagalhaes/office-opc-ua-client">Client JavaScript project</a></li>
<li><a href="https://github.com/kaiomagalhaes/office-opc-ua-client/blob/master/upc-ua-light-sensor.ino">Arduino code</a></li>
<li><a href="https://github.com/kaiomagalhaes/office-opc-ua-server">Server JavaScript project</a></li>
</ol>
<h2>Raspberry</h2>
<p>I&#39;m using a basic Ubuntu for Raspberry Pi configuration.</p>
<h3>Dependencies</h3>
<p>Let&#39;s start by installing NodeJS</p>
<pre><code class="language-bash">curl -sL https://deb.nodesource.com/setup_20.x | sudo bash -
sudo apt-get install -y nodejs
</code></pre>
<p>With your Raspberry ready, we need to configure the Arduino and connect it to the Raspberry board</p>
<h3>Arduino</h3>
<h4>1. Setting up your Arduino. This is what we&#39;re going to use:</h4>
<ul>
<li>1x Arduino Board + USB Cable</li>
<li>1x Protoboard</li>
<li>1x 5mm LDR Light Sensor</li>
<li>1x 10kΩ Resistor</li>
<li>5x Jumpers</li>
</ul>
<p><img src="/assets/blog/upcua-arduino/ldr.jpg" alt="LDR light sensor" title="LDR light sensor"></p>
<p>Your Arduino configuration should look like the one below:</p>
<p><img src="/assets/blog/upcua-arduino/circuit.jpg" alt="Circuit" title="Circuit"></p>
<p>Here you can see mine:</p>
<p><img src="/assets/blog/upcua-arduino/real-circuit.jpg" alt="LDR light sensor" title="LDR light sensor"></p>
<h4>2. Upload the following content to your Arduino</h4>
<pre><code class="language-arduino">const int pinoLDR = A0; // pin where the LDR is connected
int readValue = 0;      // variable to store the ADC read value
float voltage = 0.0;    // variable to store the voltage
float lux = 0.0;        // variable to store the estimated lux value

void setup()
{
  // Starts and configures Serial
  Serial.begin(9600); // 9600bps

  // configures the pin with LDR as input
  pinMode(pinoLDR, INPUT); // pin A0
}

void loop()
{
  // reads the voltage value on the LDR pin
  readValue = analogRead(pinoLDR);

  // converts and prints the value in electrical voltage
  voltage = readValue * 5.0 / 1024.0;

  // Simple approximation to convert voltage to lux
  // This formula needs to be calibrated for your specific LDR and setup!
  // Here we use a placeholder formula that assumes linear relation, which is not accurate.
  lux = voltage * 100; // Example conversion, adjust this formula based on your LDR&#39;s characteristics

  Serial.print(&quot;Voltage: &quot;);
  Serial.print(voltage);
  Serial.print(&quot;V\t&quot;);

  // prints the estimated lux value
  Serial.print(&quot;Lux: &quot;);
  Serial.print(lux);

  Serial.println(); // new line

  delay(1000); // waits 1 second for a new reading
}
</code></pre>
<h4>3. Connect your Arduino to your Raspberry Pi with the USB cable</h4>
<h3>Server</h3>
<h4>1. Create the project</h4>
<pre><code class="language-bash">mkdir office-opc-ua-server
cd office-opc-ua-server
npm init -y
</code></pre>
<h4>2. Install the dependencies</h4>
<pre><code class="language-bash">npm install node-opcua mongodb dotenv --save
</code></pre>
<h4>3. Create a .env file with the following variables and fill them in</h4>
<pre><code class="language-bash">MONGO_URL=
DB_NAME=
COLLECTION_NAME=
</code></pre>
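<p>Since <code>index.js</code> (next step) loads these variables with <code>dotenv</code>, it is worth failing fast when one of them is missing. A minimal sketch, assuming the same three variable names; the helper function is my own, not part of <code>dotenv</code>:</p>

```javascript
// Minimal sketch: report any required environment variable that is
// missing or blank before the server starts. Helper name is my own.
function missingEnvVars(env, required) {
  return required.filter((name) => !env[name] || env[name].trim() === '');
}

const required = ['MONGO_URL', 'DB_NAME', 'COLLECTION_NAME'];
const missing = missingEnvVars(process.env, required);
if (missing.length > 0) {
  console.error(`Missing environment variables: ${missing.join(', ')}`);
}
```

<p>Dropping a check like this near the top of <code>index.js</code> turns a cryptic MongoDB connection error into an obvious configuration message.</p>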
<h4>4. Create a file named index.js with the following content</h4>
<pre><code class="language-js">require(&#39;dotenv&#39;).config();

const opcua = require(&#39;node-opcua&#39;);
const { MongoClient } = require(&#39;mongodb&#39;);

const mongoUrl = process.env.MONGO_URL;
const dbName = process.env.DB_NAME;
const collectionName = process.env.COLLECTION_NAME;

(async () =&gt; {
  // Initialize MongoDB client and connect
  const client = new MongoClient(mongoUrl);
  await client.connect();
  console.log(&#39;Connected to MongoDB.&#39;);
  const db = client.db(dbName);
  const collection = db.collection(collectionName);

  // Initialize OPC UA server
  const server = new opcua.OPCUAServer({
    port: 4840,
    resourcePath: &#39;/UA/MyLittleServer&#39;,
    maxConnections: 20,
  });

  await server.initialize();

  const addressSpace = server.engine.addressSpace;
  const namespace = addressSpace.getOwnNamespace();

  // Add a new object to the server
  const device = namespace.addObject({
    organizedBy: addressSpace.rootFolder.objects,
    browseName: &#39;Arduino&#39;,
  });

  // Add a variable that represents the LuxValue
  namespace.addVariable({
    componentOf: device,
    nodeId: &#39;ns=1;s=the.node.identifier&#39;,
    browseName: &#39;LuxValue&#39;,
    dataType: &#39;Double&#39;,
    value: {
      get: () =&gt; new opcua.Variant({ dataType: opcua.DataType.Double, value: 0 }),
      set: async (variant) =&gt; {
        const luxValue = variant.value;
        try {
          await collection.insertOne({
            nodeId: &#39;ns=1;s=the.node.identifier&#39;,
            luxValue: luxValue,
            timestamp: new Date(),
          });
          console.log(&#39;New Lux value inserted into MongoDB.&#39;);
        } catch (error) {
          console.error(&#39;Error updating MongoDB:&#39;, error);
        }
        return opcua.StatusCodes.Good;
      },
    },
  });

  await server.start();
  console.log(`Server is now listening on port ${server.endpoints[0].port}...`);

  process.on(&#39;SIGINT&#39;, async () =&gt; {
    await client.close();
    console.log(&#39;Disconnected from MongoDB.&#39;);
    process.exit(0);
  });
})();
</code></pre>
<h4>5. From inside the project folder run:</h4>
<pre><code class="language-bash">node index.js
</code></pre>
<p>It should be up in a few seconds, showing the message:</p>
<p>Server is now listening on port 4840</p>
<p>Now your server is set!</p>
<h3>Client</h3>
<p>Now, back on your Raspberry Pi, you need to set up the client.</p>
<h4>1. Create the project</h4>
<pre><code class="language-bash">mkdir office-opc-ua-client
cd office-opc-ua-client
npm init -y
</code></pre>
<h4>2. Install the dependencies</h4>
<pre><code class="language-bash">npm install node-opcua serialport @serialport/parser-readline
</code></pre>
<h4>3. Create a file named index.js with the following content</h4>
<pre><code class="language-js">const { OPCUAClient, DataType } = require(&#39;node-opcua&#39;);
const { SerialPort } = require(&#39;serialport&#39;);
const { ReadlineParser } = require(&#39;@serialport/parser-readline&#39;);

// Server configuration
const opcuaConfig = {
  endpointUrl: &#39;opc.tcp://localhost:4840&#39;,
  nodeId: &#39;ns=1;s=the.node.identifier&#39;,
};

const serialPortConfig = {
  path: &#39;/dev/ttyUSB0&#39;, // Update to match your Arduino&#39;s serial port
  baudRate: 9600, // Match this to your Arduino&#39;s configured baud rate
};

const port = new SerialPort(serialPortConfig);
const parser = port.pipe(new ReadlineParser({ delimiter: &#39;\n&#39; }));

async function writeToOPCUAServer(value) {
  const client = OPCUAClient.create({ endpointMustExist: false });

  try {
    await client.connect(opcuaConfig.endpointUrl);
    console.log(&#39;Connected to the OPC UA server.&#39;);

    const session = await client.createSession();
    console.log(&#39;OPC UA session created.&#39;);

    const statusCode = await session.writeSingleNode(opcuaConfig.nodeId, {
      dataType: DataType.Double,
      value: value,
    });

    console.log(`Write operation status code:`, statusCode.toString());

    await session.close();
    await client.disconnect();
    console.log(&#39;Disconnected from the OPC UA server.&#39;);
  } catch (error) {
    console.error(&#39;Failed to write to OPC UA server:&#39;, error);
  }
}

// Event listener for data received from the Arduino through the serial port
parser.on(&#39;data&#39;, (data) =&gt; {
  console.log(`Data received from Arduino: ${data}`);

  // Regular expression to extract the Lux value
  const luxPattern = /Lux: (\d+(\.\d+)?)/;
  const matches = data.match(luxPattern);

  if (matches &amp;&amp; matches.length &gt; 1) {
    // Convert the extracted string to a floating-point number
    const luxValue = parseFloat(matches[1]);

    // Send the parsed Lux value to the OPC UA server
    if (!isNaN(luxValue)) {
      console.log(`Parsed Lux Value: ${luxValue}`);
      writeToOPCUAServer(luxValue).catch(console.error);
    }
  } else {
    console.error(&#39;Failed to parse Lux value from data.&#39;);
  }
});

console.log(&#39;OPC UA Arduino client initialized and running.&#39;);
</code></pre>
<h4>4. From inside the project folder run:</h4>
<pre><code class="language-bash">node index.js
</code></pre>
<p>It should be up in a few seconds, showing the message:</p>
<p>OPC UA Arduino client initialized and running.</p>
<h3>Data Visualization</h3>
<p>Now for the last, but not least important, part: visualizing the data. Because we are using MongoDB Atlas, the easiest way to see this data live is with MongoDB Atlas Charts.</p>
<ol>
<li>Create a chart</li>
<li>Select the Chart Type as Continuous Area</li>
<li>Set the X axis as the timestamp</li>
<li>Set the Y axis as the luxValue</li>
</ol>
<p>It should look like this one <a href="https://charts.mongodb.com/charts-storyforge-bvfdd/public/dashboards/65fb072d-ad57-4237-81e3-de05c32caed4">here</a></p>
<p>Now you have an integration between your Arduino and MongoDB! You can use this flow for any other sensor you want, like temperature, humidity, and so on!</p>
 ]]>
      </content>
    </entry>
  
    <entry>
      <title>Lessons learned in the last year building AI products at Codelitt</title>
      <id>https://www.kaiomagalhaes.com/blog/lessons-learned-in-my-last-year-building-ai-products-at-codelitt</id>
      <published>2024-03-19T00:00:00.000Z</published>
      <updated>2024-03-19T00:00:00.000Z</updated>
      <author>
        <name>kaiomagalhaes</name>
        <uri>https://www.kaiomagalhaes.com</uri>
      </author>
      <content type="html" xml:base="https://www.kaiomagalhaes.com" xml:lang="en">
      <![CDATA[ <p>November of 2022 is a month that will go down in history as the one in which the world was all looking at the same shiny new thing: <a href="https://openai.com/blog/chatgpt">ChatGPT</a>. This revelation captivated the imaginations of countless developers - including myself, as I immediately started building proofs of concept on top of it. In hindsight, I realize the ideas I came up with were the same ideas that most individuals and companies were trying at the time. Previously expensive products were now achievable at a fraction of the cost, making them much more accessible.
Here, I dive into the invaluable experience and lessons I learned during my time at Codelitt.</p>
<h1>Proofs of concept</h1>
<h2>RailsCodeCare</h2>
<p>My very first POC was nothing short of bold; it was a company. Since I work at a software services company, I figured that creating a new service to maintain Ruby on Rails applications would be a great idea. I&#39;d had this idea before, but just thinking about the marketing side made me dizzy. That was when I realized I could use this new technology to help me with marketing. The flow of my application was the following:</p>
<ol>
<li>I would send a topic as the title of an email to my bot. It would be something like “How to build data pipelines with Ruby on Rails 7.0”</li>
<li>The bot would receive the email and transform the title into one that could be better marketing-wise</li>
<li>Use the new title to create a blog post</li>
<li>Upload the well-formatted blog post to my website</li>
</ol>
<p>I understood that the likelihood of search engines ranking my content would be small, but I also knew that I could learn a ton from building it. Sure enough, what I suspected became a reality: a while ago, Google said that it wouldn’t rank AI-generated content. That said, I believe anyone can use AI-generated content as a draft and create high-quality content from it.</p>
<h2>Tasketeer</h2>
<p>A long-standing problem for big companies is knowledge sharing and retention. Over the years, I lost count of the applications I had built around this topic. The goal was simple: the tool was meant to democratize access to vital company knowledge. Documents and company information could be stored efficiently, and those repositories of documents and information were turned into searchable databases, eliminating the everlasting issue of knowledge being lost with personnel changes. I brought it up with Codelitt’s CEO, and we decided to build a tool to make this dream a reality. We even presented it at MongoDB.Local in New York.</p>
<p>The user flow was the following:</p>
<ol>
<li>The user creates an account</li>
<li>The user uploads any file they want to make searchable. For instance, I uploaded all of our HR documentation</li>
<li>The user can now ask questions on Slack or Tasketeer’s chat about the content and get valuable answers</li>
</ol>
<p>We first used it internally, and it was amazing to see our HR knowledge available to our team. At some point, we started seeing many big players solving this same problem, like Google’s notebook project, and we decided to open our code for anyone curious about how we built it. You can find it here.</p>
<h2>Distressed property appraiser</h2>
<p>This project was by far the most complex one. The goal was to extract information from property APIs, process it, and use it to evaluate the cost of properties in a specific area. To achieve this, we had to pull information from many different sources and use the latest options from Google Cloud Platform to process this data and turn it into valuable information.</p>
<h2>Plain text to API JSON filters</h2>
<p>This project was the most significant example of the power of automation using AI. A customer reached out to Codelitt with a simple problem: Their application had over 250 filters from which users had to choose manually. Their goal was to allow their users to specify what they were looking for through a single text input and have the application present the results by way of structured API requests. That means that the application had to:</p>
<ul>
<li>Get the text input value</li>
<li>Send the text value to a “plain text to JSON API”</li>
<li>Get the API result and send it to the filter API</li>
</ul>
<p>This was a straightforward problem, and we solved it using LangChain and Python. It is currently in production, resulting in a better user experience for our customers.</p>
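<p>A sketch of the middle step above: the model returns JSON as text, and you validate it against the known filter names before forwarding it to the filter API. The real project used LangChain and Python; this JavaScript version, with made-up filter names, only illustrates the shape of the problem:</p>

```javascript
// Hypothetical filter names for illustration; the real product has over 250.
const KNOWN_FILTERS = new Set(['city', 'minPrice', 'maxPrice', 'bedrooms']);

// Parse the model's text output and keep only recognized filters,
// since LLMs occasionally return invalid JSON or invented keys.
function toFilterPayload(modelOutput) {
  let parsed;
  try {
    parsed = JSON.parse(modelOutput);
  } catch (err) {
    return { ok: false, error: 'invalid JSON from model' };
  }
  const filters = {};
  for (const [key, value] of Object.entries(parsed)) {
    if (KNOWN_FILTERS.has(key)) filters[key] = value; // drop hallucinated keys
  }
  return { ok: true, filters };
}
```

<p>In production you would also validate value types and ranges before sending the result to the filter API.</p>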
<h2>StoryForge</h2>
<p>This one was more of an “I’ve been repeating this way too many times” situation. After building the first projects, I realized they all had something in common. They all would:</p>
<ul>
<li>Receive an input in plain text through an HTTP request</li>
<li>Match this input with content stored in a vector database</li>
<li>Send the result asynchronously to another API</li>
</ul>
<p>To save time for future cases, I built StoryForge, which is an <a href="https://github.com/codelittinc/story-forge-api">open-source application</a> that does precisely those steps, but in a configurable way. Once the server is up, the developer can:</p>
<ul>
<li>Send any text document supported by Box</li>
<li>Define the context ID, allowing it to have multiple “libraries” or “data sources,” where each library/data source can consist of multiple files</li>
<li>Send a task in plain text, e.g., “Tell me what is our Company’s HR policy”, passing the ID of the data source and the kind of prompt it should use</li>
<li>Set the webhook with an identifier so the receiving application can identify the request</li>
</ul>
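<p>Put together, a task request to a StoryForge-like server could look like the sketch below. The field names are illustrative guesses, not mirrored from the repository&#39;s actual API; check its README for the real contract:</p>

```javascript
// Hypothetical sketch of a task request to a StoryForge-like server.
// Field names are illustrative; see the repository for the real API.
function buildTaskRequest({ dataSourceId, task, promptKind, webhookUrl, webhookId }) {
  return {
    contextId: dataSourceId, // which "library" of documents to search
    task,                    // plain-text instruction
    prompt: promptKind,      // which kind of prompt to use
    webhook: { url: webhookUrl, identifier: webhookId }, // async result delivery
  };
}

const request = buildTaskRequest({
  dataSourceId: 'hr-docs',
  task: "Tell me what is our Company's HR policy",
  promptKind: 'question-answering',
  webhookUrl: 'https://example.com/storyforge-results',
  webhookId: 'hr-policy-query-1',
});
```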
<h1>Lessons learned</h1>
<h2>1. Adding AI to products is easy</h2>
<p>Dealing with AI through APIs has never been easier, but solving hard, real-world problems is still hard. With OpenAI (and now many competitors), processing data with AI is as easy as making an HTTP request, but it is only that simple for easy problems. For instance, building the entire RailsCodeCare flow was easy: it meant using Zapier to read the emails and send the content to my Ruby/Sinatra API, and that was it. Because it wasn’t data-heavy, it was simple. However, the Distressed property appraiser was more complicated. Training an AI model is still costly and time-consuming. Many tools are available that make it easier, but it still took us a couple of months to get this last project done, even with a highly specialized AI engineer supporting us. The main challenges around building a custom LLM come from two sources: specialized technical resources and the lack of data available for many mundane tasks like appraisals.</p>
<h2>2. It will only get cheaper to build AI products</h2>
<p>Although the goal for any technology is for it to become more accessible and advanced over time, I wasn’t expecting it to happen at this velocity with AI. In 2023 alone, OpenAI released a new model (GPT-4) and expanded the context window to 128,000 tokens. Other companies, such as Anthropic, offer models like Claude 3 that support 200,000-token context windows. While we were building the “Plain text to API JSON filters” project, OpenAI’s context window was 32,000 tokens, which was a challenge for us, as we had to send a description for each of the 250 filters. A month after we released the first version, the limit went to 128,000 tokens, and the price dropped.</p>
<p>I can only expect the context window to become close to unlimited in one or two years, and the costs to decrease by at least one order of magnitude.</p>
<h2>3. Healthy data sources are important</h2>
<p>It is cheaper to have a healthy database than to expect AI to deal with wrong data input. Since AI hit the scene, I’ve seen many companies try to put all their data into AI models, expecting them to return valuable data. Unfortunately, I haven’t seen them find success in any of those cases. AI models can only perform properly when given the minimum data quality level, which will vary depending on your expectations. The essential requirement for any AI-related project is to prepare the data uniformly and meaningfully. For instance, just throwing in an entire SQL database to a model and expecting it to give you insights is the same as doing it for an engineer who doesn’t understand the data. You’ll waste resources, valuable time and get nothing out of it.</p>
<h2>4. Profession replacement isn’t as simple as plugging in an AI model (for most cases)</h2>
<p>I first built an integration with Intercom for a chatbot in 2015. The problem at the time was that the company’s CEO couldn’t pay someone to answer customers’ questions, so he had to answer them himself. Nowadays, that is almost no longer a problem. Many companies offer “The best and only chatbot your company needs, using your documents to answer any customer question”. This looks great, until you realize that you need more documentation, or that it isn’t updated as often as you need. I don’t see myself talking to chatbots frequently, but I can think of two cases this year where poor customer experience made me give up on a product. In both cases I tried to talk to customer service, and not only was I not given the option to speak to a human, the bot would only return “I don’t have an answer, please try asking differently.” The money I would spend on these products wouldn’t pay for a customer rep, but I’m sure the cost of many users bailing on the product for lack of one could make a dent.</p>
<p>In a similar situation, I’ve seen this replacement succeed: self-checkout in supermarkets, with the option of paying a person, gets the best of both worlds. If I am knowledgeable enough, I can use the self-checkout; if not, I can have someone help me.</p>
<p>This includes replacing engineers with AI. If my only job as an engineer had been writing code, my life would’ve been much easier over the years. But the reality is that besides writing code, I’m also expected to talk to clients, understand their requirements, implement the product, deploy it, monitor it in production, solve bugs, and so on. AI can support me and help me be more efficient with most of these steps, but I don’t see it replacing me - at least not in the next five years.</p>
 ]]>
      </content>
    </entry>
  
    <entry>
      <title>The hidden challenges of rebuilding products</title>
      <id>https://www.kaiomagalhaes.com/blog/The-hidden-challenges-on-rebuilding-products</id>
      <published>2024-03-07T00:00:00.000Z</published>
      <updated>2024-03-07T00:00:00.000Z</updated>
      <author>
        <name>kaiomagalhaes</name>
        <uri>https://www.kaiomagalhaes.com</uri>
      </author>
      <content type="html" xml:base="https://www.kaiomagalhaes.com" xml:lang="en">
<![CDATA[ <p>At some point, every engineer I&#39;ve ever worked with has said &quot;Let&#39;s rebuild this product&quot;. One day, when evaluating a SaaS product, the feedback I gave my CEO was: &quot;Instead of buying it, we should just build it from scratch. We’ll get more bang for our buck&quot;. Ultimately, we decided neither to buy it nor to build it. However, knowing what I know today, I wouldn&#39;t have been so quick to suggest rebuilding it. The challenges you face when building a new product and when rebuilding an existing one are completely different. The best comparison is building a brand new house versus rebuilding one while living inside it. Now, let me tell you a little bit about what we faced here.</p>
<h2>A background story</h2>
<p>Here at Codelitt, we are about to deliver one of our biggest projects to date. It is an application that has been around for many years. The customer wanted to improve the experience by redesigning and rebuilding the frontend completely. The first version of the application was made of a mix of React and BackboneJS in the front end and .NET in the backend.</p>
<p>The goal was <em>simple</em>:</p>
<ul>
<li>Redesign the entire frontend with over 500 pages and thousands of functionalities.</li>
<li>Implement an architecture to last the next ten years.</li>
<li>Build it fast.</li>
<li>Deploy it incrementally so its users can start seeing the benefits of the new application with its new features.</li>
</ul>
<p>I hope you were able to spot the irony in the word &quot;simple&quot; up there. When I first reviewed this project, I was surprised by its size, and I was expecting it to be a herculean effort. However, things turned out to be far more challenging than I thought. Rebuilding an existing application has way too many hidden challenges that nobody talks about during a sales cycle. My initial impression was that building a product from an existing one should be easier, but as one of my favorite blog posts of all time is titled: <a href="https://web.archive.org/web/20240218135401/https://johnsalvatier.org/blog/2017/reality-has-a-surprising-amount-of-detail">Reality has a surprising amount of detail</a>.</p>
<p>As we started the rebuild, we faced many challenges that nobody had warned me about.</p>
<h3>Easter eggs</h3>
<p><img src="/assets/blog/hidden-functionalities/easter-egg.jpg" alt="Easter Eggs" title="Easter eggs"></p>
<p>The first hidden challenge we found was the amount of easter eggs. When we think about building a new application, we organize the tasks in a way that allows us to achieve the desired behavior. When we are rebuilding one we need to replicate the current behavior. The problem is, what happens when there is no written definition of current behavior? What do you do when there&#39;s no source of truth for the current features?</p>
<p>Well, given that the application exists, the definition of current behavior is in the code. However, the project and product managers are not close to the codebase. That means the engineers are the ones who know what the current application does. Every other week, we would find a set of functionalities that nobody even knew existed. Needless to say, it was impossible to get our original estimation right.</p>
<h3>Making it right the second time</h3>
<p><img src="/assets/blog/hidden-functionalities/making-it-right-second-time.png" alt="Making it right the second time" title="Making it right the second time"></p>
<p>Since we are basing a new design on top of an existing one, it ends up being impossible to be unbiased. Quite a few times, we found ourselves asking if we were building it right the second time, but because the current backend worked in a specific way, we had our hands tied. Some functionalities were overly complex, and there was just nothing we could do that wouldn&#39;t make the cost of it prohibitive.</p>
<p>It is like wanting to change the places of the walls in your house without being able to remove the ceiling. More often than not, it&#39;s simply impossible.</p>
<h3>Recreating functionalities goes beyond the ticket definition</h3>
<p><img src="/assets/blog/hidden-functionalities/recreating-functionalities.jpg" alt="Recreating functionalities go beyond the ticket definition" title="Recreating functionalities go beyond the ticket definition"></p>
<p>When we are creating a new feature in a product, we usually need to worry about two factors:</p>
<ul>
<li>How the current code is set up</li>
<li>What we need to change</li>
</ul>
<p>When we are rebuilding an application, we need to add a third factor:</p>
<ul>
<li>Understanding how the current application does it</li>
</ul>
<p>With this third variable, we can get into all sorts of problems, but I&#39;ll focus on the three biggest ones:</p>
<ol>
<li>The code is written in a different programming language/framework</li>
<li>The code readability is poor</li>
<li>The functionality is overly complex</li>
</ol>
<p>In my scenario, we hit all three. Because the existing application was built over many years, the original team had to work with the technology constraints of their time. They had multiple technologies mixed - and not in a fun way. With that, it became expensive for our team to properly understand the application&#39;s current behavior and recreate it in the new application. Here, we have a two-sided view of the situation:</p>
<p>a) Since we have an existing code, it makes it <em>easier</em> because we don&#39;t need to spend time thinking about which data flow to follow</p>
<p>b) Since we have an existing code, it makes it <em>harder</em> for us to rebuild the application the right way because we need to follow the current data flow</p>
<p>I ultimately believe it is a mix of both, depending on the situation. A lack of freedom can be a good thing because it limits our options, and it can be bad for the same reason. Having the team reverse-engineer code written in a different framework is also problematic because it takes a lot of time.</p>
<h4>But there is a bright side!</h4>
<p>This was not our first time modernizing an application, and it won&#39;t be the last. Once we recognize these challenges, bringing new life to old user experiences can be done on time (and often on budget). Next, I&#39;ll talk about how to avoid these traps and set your team up for success when rebuilding a large application. Stay tuned!</p>
 ]]>
      </content>
    </entry>
  
    <entry>
      <title>On the sublime feeling of deleting code</title>
      <id>https://www.kaiomagalhaes.com/blog/On-the-sublime-feeling-of-deleting-code</id>
      <published>2024-03-06T00:00:00.000Z</published>
      <updated>2024-03-06T00:00:00.000Z</updated>
      <author>
        <name>kaiomagalhaes</name>
        <uri>https://www.kaiomagalhaes.com</uri>
      </author>
      <content type="html" xml:base="https://www.kaiomagalhaes.com" xml:lang="en">
<![CDATA[ <p>Today, while going through <a href="https://thecodelesscode.com/">The Codeless Code</a>, it came to me that my happiest moments in programming are not when I&#39;m writing code, but when I am deleting it. The reason is simple: &quot;no code&quot; means both no maintenance and no bugs. Code that doesn&#39;t need to exist and is removed will not cause problems when migrating to a new version of a programming language or framework. Don&#39;t get me wrong, I understand that there is no such thing as removing complexity just by deleting a piece of code (unless it means removing a functionality altogether). More often than not, I find myself deleting lines because I found a better way of handling a flow, or because I found a library in which people smarter than me solved the problem I was facing.</p>
<p>There is no such thing as removing complexity, but there is this sublime feeling of placing complexity where it should be. </p>
 ]]>
      </content>
    </entry>
  
    <entry>
      <title>How to keep building your engineering skills as a CTO</title>
      <id>https://www.kaiomagalhaes.com/blog/Staying-technical-as-a-CTO</id>
      <published>2024-03-01T00:00:00.000Z</published>
      <updated>2024-03-01T00:00:00.000Z</updated>
      <author>
        <name>kaiomagalhaes</name>
        <uri>https://www.kaiomagalhaes.com</uri>
      </author>
      <content type="html" xml:base="https://www.kaiomagalhaes.com" xml:lang="en">
<![CDATA[ <p>One of my first thoughts when I became a CTO was <em>I won&#39;t keep up with the latest technologies anymore</em>. My second one was <em>There must be a way to keep on learning new tools while also managing people, I only need to find out how</em>. It is important to understand that not every CTO wants to continue being an engineer. Besides, depending on the size of the company, it might be close to impossible for anyone in that position to keep studying while at work. There is a difference between <a href="https://www.linkedin.com/in/gustavsoderstrom/">Gustav Söderström</a>, the CTO of <a href="https://open.spotify.com/">Spotify</a>, where they have over 3,000 engineers, and me, the CTO of <a href="https://codelitt.com/">Codelitt</a>, where we have under 40 engineers. Thus, my goal here is not to provide a silver bullet, but to share what works for a small software company.</p>
<p>There are several actions that I do daily to get me closer to the engineering craft:</p>
<h2>Participate in technical discussions on technologies I am familiar with</h2>
<p>Because <a href="https://codelitt.com/">Codelitt</a> is a software house, we have many projects running at any given time in multiple technologies. I do not expect to be versed in every technology out there, but I do expect to keep growing in the ones I&#39;ve worked with as an engineer: <a href="https://rubyonrails.org/">Ruby on Rails</a>, <a href="https://react.dev/">React</a>, and <a href="https://nodejs.org/en">NodeJS</a>. I take every opportunity to join my team&#39;s technical discussions on these technologies. This helps me not only to understand their reasoning but also to see how they make decisions. It is always a pleasure to watch engineers debate different points of view, and I learn while watching it.</p>
<h2>Build proofs of concept on technologies I want to know more about</h2>
<p>I build proofs of concept to test out new technologies. I don&#39;t need a specific goal for them, so anything anecdotal that allows me to test them is fair game.</p>
<p>Some examples:</p>
<ul>
<li><p>As soon as Microsoft released the <a href="https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/overview-identity">Face API</a>, I built an application that allowed the user to upload a picture. It would recognize the face and replace it with an emoji based on the API&#39;s assessment of the person&#39;s feelings. To my surprise, a couple of weeks after I built this POC, one of our customers asked us to build an application that would let the user search a set of images with emotion filters, for example: &quot;3 happy people&quot;, &quot;One sad couple&quot;, and so on.</p>
</li>
<li><p>Once, a customer mentioned that they didn&#39;t know how hard it would be to create an application that allowed the user to edit an image and download it. Curiosity got the best of me, and I built an application that let the user draw on top of any image they uploaded.</p>
</li>
<li><p>When I was curious about the <a href="https://www.twilio.com/docs/usage/api">Twilio API</a>, I built an application that could send SMS to my friends using Twilio.</p>
</li>
<li><p>Ever since I started university to get my bachelor&#39;s degree in Software Engineering, I had wanted to understand how to write software for an Arduino. A few months ago, I decided to make it happen. I bought some Arduino parts and built an application that would check my calendar: if I was in a meeting, it would turn on a &quot;red&quot; light; if I was about to join one, a &quot;yellow&quot; light; and if I was free, a &quot;green&quot; one. It was fun.</p>
</li>
</ul>
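<p>The decision logic from that last POC can be sketched as a small function. The names and the 5-minute &quot;about to join&quot; threshold are my own assumptions; the actual POC also had to talk to the calendar API and the Arduino:</p>

```javascript
// Sketch of the calendar-light decision (names and threshold assumed):
// red = in a meeting, yellow = a meeting starts within 5 minutes, green = free.
const SOON_MS = 5 * 60 * 1000;

function lightColor(now, meetings) {
  for (const { start, end } of meetings) {
    if (now >= start && now < end) return 'red';
    if (now < start && start - now <= SOON_MS) return 'yellow';
  }
  return 'green';
}
```

<p>The nice part of isolating the decision like this is that it can be tested without any hardware attached.</p>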
<p>I&#39;ve built way too many proofs of concept to list, but I believe these give a good idea.</p>
<h2>Newsletters</h2>
<p>For quite a long time, I&#39;ve been using newsletters as my primary source of news in the tech world. I like them because I can choose the topics that interest me most and get a curated list about them. As I write this article, these are the ones I&#39;m receiving:</p>
<ul>
<li><a href="web-version?ep=1&lc=c1d7ff82-e834-11ed-aa39-b1c615c68784&p=09b9283c-dad6-11ee-ad26-8d5ef47b0992&pt=campaign&t=1709641341&s=bc9975daec3581b1c3f4a86fd664110062e1fd87cdfdc31701593899023c4841">TLDR - Web Dev</a></li>
<li><a href="https://actions.tldrnewsletter.com/web-version?ep=1&lc=b6691b72-e834-11ed-8d14-15fe90968199&p=3858743c-d6d9-11ee-ac71-e71aa11a084a&pt=campaign&t=1709206602&s=6f6dbec981517cf5613a78fd73182a95cf4f4e705c868936a359627c4b4a3909">TLDR</a></li>
<li><a href="https://www.superhuman.ai/p/apple-reportedly-sacrifices-car-plans-focus-ai">Superhuman</a></li>
<li><a href="https://javascriptweekly.com/issues/676">Javascript weekly</a></li>
<li><a href="https://rubyweekly.com/issues/691">Ruby weekly</a></li>
</ul>
<h2>Participate in the kickoff of every project</h2>
<p>The kickoff of a project is a magical moment. Whenever I get a chance, I try to participate, as that is when the engineers make the decisions that will haunt them until the end of the project. That is where we define the architecture, tools, and integrations. It is also a great moment to build small proofs of concept to test the technologies that will be used, in case any of them are new. In one of the kickoffs I participated in, I learned about <a href="https://github.com/pmndrs/zustand">Zustand</a>, <a href="https://tanstack.com/query/v3/">React Query</a>, and <a href="https://mui.com/material-ui/">Material UI</a>. Now I can&#39;t live without them.</p>
<h2>Build internal applications to help your team</h2>
<p>Every company has its own specifics that are hard to fit into pre-built software. This is an opportunity for a CTO to make all the difference to the team while also studying and growing. Every time I see an opportunity for automation, I take it. I always build everything in the technology I&#39;m most familiar with, because I know that a) I&#39;ll be the one maintaining it and b) I&#39;ll build everything alone. Through this experience, I&#39;ve come up with a few open-source projects:</p>
<ul>
<li><a href="https://github.com/codelittinc/roadrunner">Roadrunner</a></li>
<li><a href="https://github.com/codelittinc/notifications">Notifications</a></li>
<li><a href="https://github.com/codelittinc/backstage-app">Backstage</a></li>
</ul>
<p>These are the ones that are most obvious to me. With that said, the most important action you can take to keep learning new technologies is simply to keep looking for opportunities to learn. Every project, every engineer, every conversation is an opportunity to grow.</p>
 ]]>
      </content>
    </entry>
  
    <entry>
      <title>On work</title>
      <id>https://www.kaiomagalhaes.com/blog/on-work</id>
      <published>2022-08-24T00:00:00.000Z</published>
      <updated>2022-08-24T00:00:00.000Z</updated>
      <author>
        <name>kaiomagalhaes</name>
        <uri>https://www.kaiomagalhaes.com</uri>
      </author>
      <content type="html" xml:base="https://www.kaiomagalhaes.com" xml:lang="en">
<![CDATA[ <p>During a dinner conversation not very long ago, my business partners and I stumbled onto the topic:</p>
<p><em>How can some people spend so much time at work?</em></p>
<p>Over the years, my friends have always known me as the workaholic of the group. I had the drive to push myself and a constant competitiveness with my peers, yet I rarely saw what I was doing as work.</p>
<p>The reason I chose software engineering as a profession is that I wanted to build. I didn&#39;t understand what software or a product was, but I knew it was about building. To me, there is a wonder in programming: the feeling of creating something new. Looking back, I couldn&#39;t differentiate working from playing video games. Time would pass very fast, and I would get excited and sometimes frustrated when blocked. It was fun.</p>
<p>With all that said, every once in a while I would receive a visit from the joy killer: <em>A tight deadline</em>. That would often remind me that what I was doing was work.</p>
<p>Having to explain this to those around me felt weird. Growing up I heard that work was about making money, and that&#39;s it. Because I loved my work and would often apply too much of myself to it, I would get shy about explaining the reason.</p>
<p>At some point in that conversation, one of my peers said:</p>
<blockquote>
<p>To me work is like a game and the money in the bank is the score point.</p>
</blockquote>
<p>When I heard that I felt a lightbulb appear over my head. I was given an easy way to explain my view of my relationship with work.</p>
<p>With all that said, there is more to life than work, even if you are a happy worker. Over the past ten years, I&#39;ve been struggling with work-life balance. Because I work remotely, starting and stopping at specific hours is hard. I am using my personal computer to write this post; if I were using the professional one I would be knee-deep in my inbox or Slack by now.</p>
<p>During the past ten years in the software industry, I built many products, but now I want to build the right ones. I found that we all have phases, and I&#39;m past the &quot;I want to build anything&quot; one. I want to believe that now I value my time more. The reason for this change is that family, friends, and free time to think or write are as important to me as my work. Thus, to have enough time for everything, I need to learn how to say no to what is not important.</p>
<p>After having that conversation, I&#39;ll now use that same sentence when I don&#39;t want to get too deep into why I work the way I do. Yet, whenever I get the chance I&#39;ll explain that, to me, money isn&#39;t the goal but the consequence. I do it because I am lucky enough to grow a little bit every day by doing what I love.</p>
<p>I&#39;ll end this post with a poem that I found a while ago. It&#39;s called &quot;On Work&quot; by Kahlil Gibran. I hope you enjoy it as much as I do.</p>
<pre><code>On Work

Kahlil Gibran
1883 – 1931

     Then a ploughman said, Speak to us of Work.
     And he answered, saying:
     You work that you may keep pace with the earth and the soul of the earth.
     For to be idle is to become a stranger unto the seasons, and to step out 
     of life’s procession, that marches in majesty and proud submission 
     towards the infinite.

     When you work you are a flute through whose heart the whispering of
     the hours turns to music.
     Which of you would be a reed, dumb and silent, when all else sings
     together in unison?

     Always you have been told that work is a curse and labour a misfortune.
     But I say to you that when you work you fulfil a part of earth’s furthest
     dream, assigned to you when the dream was born,
     And in keeping yourself with labour you are in truth loving life,
     And to love life through labour is to be intimate with life’s inmost secret.

     But if you in your pain call birth an affliction and the support of the 
     flesh a curse written upon your brow, then I answer that naught but 
     the sweat of your brow shall wash away that which is written.

     You have been told also that life is darkness, and in your 
     weariness you echo what was said by the weary.
     And I say that life is indeed darkness save when there is urge,
     And all urge is blind save when there is knowledge,
     And all knowledge is vain save when there is work,
     And all work is empty save when there is love;
     And when you work with love you bind yourself to yourself, and to 
     one another, and to God.
    
     And what is it to work with love?
     It is to weave the cloth with threads drawn from your heart, even 
     as if your beloved were to wear that cloth.
     It is to build a house with affection, even as if your beloved 
     were to dwell in that house.
     It is to sow seeds with tenderness and reap the harvest with 
     joy, even as if your beloved were to eat the fruit.
     It is to charge all things you fashion with a breath of your
     own spirit,
     And to know that all the blessed dead are standing about you 
     and watching.

     Often have I heard you say, as if speaking in sleep, “He who works in 
     marble, and finds the shape of his own soul in the stone, is nobler 
     than he who ploughs the soil.
     And he who seizes the rainbow to lay it on a cloth in the likeness of 
     man, is more than he who makes the sandals for our feet.”
     But I say, not in sleep but in the overwakefulness of noontide, that 
     the wind speaks not more sweetly to the giant oaks than to the least of 
     all the blades of grass;
     And he alone is great who turns the voice of the wind into a song made 
     sweeter by his own loving.

     Work is love made visible.
     And if you cannot work with love but only with distaste, it is better that 
     you should leave your work and sit at the gate of the temple and take alms 
     of those who work with joy.
     For if you bake bread with indifference, you bake a bitter bread that feeds 
     but half man’s hunger.
     And if you grudge the crushing of the grapes, your grudge distils a poison
     in the wine.
     And if you sing though as angels, and love not the singing, you muffle man’s 
     ears to the voices of the day and the voices of the night.
</code></pre>
<p>This poem can be found at <a href="https://poets.org/poem/work-4">poets.org</a></p>
 ]]>
      </content>
    </entry>
  
    <entry>
      <title>On the relation between fear and ignorance</title>
      <id>https://www.kaiomagalhaes.com/blog/on-fear-and-ignorance</id>
      <published>2022-08-24T00:00:00.000Z</published>
      <updated>2022-08-24T00:00:00.000Z</updated>
      <author>
        <name>kaiomagalhaes</name>
        <uri>https://www.kaiomagalhaes.com</uri>
      </author>
      <content type="html" xml:base="https://www.kaiomagalhaes.com" xml:lang="en">
<![CDATA[ <p>I&#39;m writing from an airplane, which makes it the perfect time to talk about fear. Growing up I had this image in my mind that I feared heights. I don&#39;t know if this fear was real or something my childish brain made up. One could argue that any feeling is made up. Fear is the most primitive instinct we animals have; it exists to help us stay alive. The further back in time we go, the more relevant fear was. To me, the most interesting part about it is its many shapes and forms. Everything can trigger it: a scream, a scare, a high-pitched voice, any kind of bad news. Fear is everywhere and in everything.</p>
<p>The first time I flew in an airplane I was 11 years old. I only remember that I slept for most of it. I don&#39;t recall being afraid; I like to think I was cautious. I find it hard to believe I didn&#39;t feel anything, but I wasn&#39;t terrified. I slept so much that my aunt got scared when she struggled to wake me up. Many years later I was traveling again, and suddenly I was terrified. I would hold on to my seat and stay tense the entire flight. I&#39;m not sure what else changed, but I know my perception did.</p>
<p>I still remember being 14 and at work when every TV channel started showing the accident. Two airplanes, a military jet and a commercial one, collided in midair. The latter crashed and everybody aboard died. Before that, there was an accident when an airplane was landing in Brazil. The runway was too wet and there was not enough traction; the plane tried to take off again, but it was too late. It crashed into a building and more than two hundred people died. Lastly, a few years ago a missile hit an airplane; the military had mistaken it for a threat. The result, you can imagine.</p>
<p>When I remember those incidents I&#39;m terrified of flying. I see two patterns: there was a plane, and everybody died. If I isolate these incidents I have enough reasons to never set foot in an airport ever again. No reasonable person would risk being inside a can flying thousands of meters off the ground. With that said, isolating events is the worst thing we can do if we want to get closer to the truth.</p>
<p>Over time I concluded that fear, like any other feeling, requires perspective to exist. You can&#39;t measure a feeling alone; every feeling requires a comparison to exist. As a kid, I feared the bullies at school, yet I would face them any day if the other option was facing my mom. This brings me back to flying. I&#39;ve seen a few incidents with airplanes. One of my favorite music groups died in their late twenties when their airplane crashed. Deep down my instinct is to believe that flying is a horrible option because it is unsafe; the only thing beneath you is the ground that will kill you.</p>
<p>When I was twelve years old my mom&#39;s cousin, her husband, and her son died in a car crash. When I was eighteen the director of my school died with four friends in a similar event. My mom almost lost her left foot when her motorcycle collided with a car. I came close to dying when my motorcycle collided with a car several years ago. Every day people die in car accidents, but I can only remember a handful of airplane accidents in my life.</p>
<p>When I compare these situations it brings me perspective. I don&#39;t fear going to a party in my car, or getting back home from the airport. Yet, there is a bigger chance of me losing everything in those scenarios than in flying. When I put it all in context, I still fear flying, but now I can face it.</p>
<p>While I&#39;m here writing about life-and-death scenarios, I can&#39;t help but remember that fear isn&#39;t only about that. One could argue that fear is all about survival, but survival has many meanings. To my friend&#39;s wife, it is being able to buy what she wants when she wants it; to me, it is having food. To a coworker, it is maintaining his lifestyle. Anything that threatens these scenarios will trigger fear in these characters.</p>
<p>My problem with this feeling is that, while it was important in the past, right now it makes us stupid. When I see two smart people arguing non-stop, I can&#39;t help but ask myself what they are afraid of. When I get stressed out with my wife, I fail to recognize what I&#39;m afraid of.</p>
<p>One day a work colleague referred someone to my team, but I didn&#39;t like the person for the position. That day I struggled to sleep because I couldn&#39;t stop thinking about it. I didn&#39;t want to let this colleague down, but I also couldn&#39;t hire his referral. Thus I found myself in conflict: there was no easy way out, and no way without a minimal level of confrontation. At some point, I cracked the problem. I wanted my colleague to feel heard; my fear was creating a barrier between us. My conclusion was that as long as I had a fair justification there was no bad ending for this story. The moment I realized that, I was finally able to sleep.</p>
<p>In another instance, I had a coworker causing panic in my team. The moment I saw that, I went into rage mode. I couldn&#39;t get myself to think; I could only go after this person and make sure he would stop. I achieved it, but the personal cost was too high. I felt like I lost control over my own mind. When I review this scenario I often wonder what I was afraid of. In hindsight, I feared for my team&#39;s trust in our company. I feared for their belief in our mission. I want to believe that next time this happens I will be capable of bringing that perspective and reacting better.</p>
<p>When I compare these scenarios, I conclude that in today&#39;s society fear is an obstacle. Knowledge and self-understanding are what we should use to survive. It is of utmost importance that we keep our feelings under control, and the best way to do that is to keep perspective.</p>
<p>Not long ago I read about the <a href="https://www.huffpost.com/entry/life-lessons_b_3758774">5x5</a> framework for reacting to events in life. I found it interesting because it gives us, again, perspective. It is a good tool to keep our fears in check and keep us grounded. I have often had fights with my wife about things that aren&#39;t important. One of our main struggles in life is to tell the difference between what is important and what isn&#39;t.</p>
<p>Today in a conversation with my wife, she told me that she had flooded our kitchen. While I was trying to understand the situation, I realized it wasn&#39;t so bad after all. Even so, I found myself angry. I was angry because I feared I couldn&#39;t understand her. In hindsight, I understand that I lost control over myself, yet again. It is too easy to surrender to panic and despair.</p>
<p>My conclusion is that understanding what I fear is understanding my entire self. Irrational fear and knowledge can&#39;t live in the same place. Understanding it doesn&#39;t make me a perfect decision-maker, but it makes me better at deciding, and I&#39;ll take any improvement I can get.</p>
 ]]>
      </content>
    </entry>
  
    <entry>
      <title>The importance of a design system for the engineering team</title>
      <id>https://www.kaiomagalhaes.com/blog/The-importance-of-a-design-system-for-the-engineering-team</id>
      <published>2020-05-06T00:00:00.000Z</published>
      <updated>2020-05-06T00:00:00.000Z</updated>
      <author>
        <name>kaiomagalhaes</name>
        <uri>https://www.kaiomagalhaes.com</uri>
      </author>
      <content type="html" xml:base="https://www.kaiomagalhaes.com" xml:lang="en">
<![CDATA[ <p>During one of my recent job interviews with a frontend engineer, he asked me &quot;Will I be responsible for designing the application as well?&quot;. My first reaction was one of shock; at first, I couldn&#39;t tell whether it was a real question. After a couple of seconds, I realized that this is actually a common bad practice. Asking an engineer to design an application is like entering a camel in a horse race: the camel can run, but you wouldn&#39;t bet on it to win. My answer was that we would always have a designer define the look and feel of the application.</p>
<p>This conversation made me think about the challenges engineers face on a daily basis. When it comes to the cost of software, it is more expensive to maintain an artifact than to build it the first time; maintenance is where most of the money goes. Still, we can take action to improve the maintainability of a frontend artifact.</p>
<p>One big issue I&#39;ve faced in the past was the lack of proper design planning. The story would go like this:</p>
<ol>
<li>I would see a design for a page, and I wouldn&#39;t have an idea of any other page.</li>
<li>I would build, let&#39;s say, a button for that page.</li>
<li>Later I would see another page, and discover that there is one more variation of that button.</li>
<li>I would go to my old button and update it to allow for the new variation.</li>
<li>Repeat steps 3 and 4 a dozen times.</li>
</ol>
<p>Now imagine these steps happening with all the components of a page. I remember one developer telling me:</p>
<blockquote>
<p>If I knew how many things would be different across these cards, I would have built a generic one from the beginning. Instead, I always assumed we wouldn&#39;t have a new variation. It is 2 am, and here I am creating another one.</p>
</blockquote>
<p>Another problem that is easy to ignore is color naming. Naming is so important that the human race created a name for every single known star in the universe; nowadays, if it exists, it has a name. Now, imagine the result of having someone who doesn&#39;t have a clue about color nomenclature be in charge of naming colors. It is the same as asking a painter to name a new species of spider in Latin.</p>
<p>For instance, this happened:</p>
<p><img src="https://raw.githubusercontent.com/kaiomagalhaes/kaiomagalhaes.github.io/master/_posts/images/design-system-1.png" alt="Messy SASS colors example"></p>
<p>This is a good example of what happens when an engineer is left to name colors.</p>
<p>You can check the full file <a href="https://gist.github.com/kaiomagalhaes/0f0043451ca3b4afb5c6065fa0fd3ada">here</a></p>
<p>And last but not least, grids. When it comes to placing components on the screen we always need to think about many screen sizes. We can rarely have two developers build a navbar with the same dimensions for desktop and mobile without giving them instructions about the proportions by which it should shrink or grow. For instance:</p>
<p><img src="https://raw.githubusercontent.com/kaiomagalhaes/kaiomagalhaes.github.io/master/_posts/images/design-system-2.png" alt="Header example 1">
<img src="https://raw.githubusercontent.com/kaiomagalhaes/kaiomagalhaes.github.io/master/_posts/images/design-system-3.png" alt="Header example 2"></p>
 ]]>
      </content>
    </entry>
  
    <entry>
      <title>The importance of a design system when building products</title>
      <id>https://www.kaiomagalhaes.com/blog/The-importance-of-a-design-system-when-building-products</id>
      <published>2020-04-26T00:00:00.000Z</published>
      <updated>2020-04-26T00:00:00.000Z</updated>
      <author>
        <name>kaiomagalhaes</name>
        <uri>https://www.kaiomagalhaes.com</uri>
      </author>
      <content type="html" xml:base="https://www.kaiomagalhaes.com" xml:lang="en">
<![CDATA[ <p>A few weeks ago, while interviewing a developer for a position at Codelitt, he had only one question for me: &quot;Would I be the one creating the design?&quot; As we were on a call, he may have thought that my video froze, because it took me a while to realize that he was serious. I kept asking myself &quot;Why would a developer ask that?&quot; After I came out of the shock, my answer was a simple one: &quot;Of course not, we are not savages here.&quot; He laughed, I laughed, but after the interview was over I couldn&#39;t stop thinking about it. While trying to understand it, I had flashbacks of similar situations I&#39;ve seen in the past.</p>
<p>Not so long ago, a friend of mine was starting a marketing agency, and she needed a developer. As her closest reference in the market, she reached out to me for guidance on how to hire a software engineer. Aware that she didn&#39;t have any experience with web development, I asked about her plans for the team. At the time I assumed that when it comes to web development it is natural to expect a designer on the team. I couldn&#39;t have been more wrong. When we started discussing the team and I brought up the designer, her answer was:</p>
<blockquote>
<p>If the engineer is good he will bring the designer with him or design it himself.</p>
</blockquote>
<p>After I explained to her that it doesn&#39;t work like that, we agreed to disagree. Needless to say, her agency never kicked off.</p>
<p>By the end of the meeting with the candidate, although I was happy that nowadays I always have a designer by my side, I still wanted an answer to the question: is a designer on my team enough to guarantee a nice-looking output and a maintainable stylesheet?</p>
<p>After some careful consideration, and comparing situations across projects, I came to the conclusion that no, having a designer on my team isn&#39;t enough. The reason is simple: just adding a specialist to a team doesn&#39;t solve any problem if there is no process behind it. Let me give you an example: reviewing one of our oldest projects, I came across this jewel. Below you have the colors.sass file.</p>
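<p>The original screenshot didn&#39;t survive into this feed, but the file looked roughly like this (illustrative names and values, not the real ones):</p>

```scss
// Numbered color names accumulate with no meaning attached to them
$blue1: #1f3a93;
$blue2: #2574a9;
$blue6: #89c4f4; // blue3, blue4, and blue5 are nowhere to be found
$gray1: #bdc3c7;
$gray14: #6c7a89;
```

<p>A design system would instead define a small semantic palette, such as <code>$color-primary</code> and <code>$color-surface</code>, that designers and engineers share.</p>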
<p>As you may notice, we have quite a few tones of blue. Where did blue3, blue4, and blue5 go? I don&#39;t know. This situation is a good example of what happens when we leave a set of engineers to name colors. The full file contains 44 different tones. In this scenario, we always had a designer working together with the engineering team. A likely reason we ended up here is that the engineers changed quite often, and on top of this constant change in staff, we didn&#39;t have a design system for new ones to follow.</p>
<p>When it comes to building pages, there is a lot that engineering can copy from the design system. Color naming is the most obvious, but the same principle applies to componentization. By having the design system we can develop our components with a clear reusability goal. Below, for instance, the first two are on our website while the third one is on our blog.</p>
<p>And we can even build the variations together at the beginning of the project</p>
<p>In the past, for instance, we had this situation on one of our projects: we didn&#39;t know how many button variations we would end up with, and we didn&#39;t have a clear picture of how we would reuse them. In the end the engineers had to make decisions about the types of buttons and how they would behave in a specific rather than a generic way.</p>
<p>In contrast, with a design system we can define a good reusability base for the key components at the beginning of the project.</p>
<p>With a design system we can guide our engineers better and direct our efforts toward long-term speed. After we build the basic components, more often than not we will end up reusing them rather than creating a new one for each specific case. This way we can focus on building features that users will love across the board, without having to remake each one from scratch.</p>
 ]]>
      </content>
    </entry>
  
    <entry>
      <title>Sensitive data storage made easy</title>
      <id>https://www.kaiomagalhaes.com/blog/Sensitive-data-storage-made-easy</id>
      <published>2017-06-16T00:00:00.000Z</published>
      <updated>2017-06-16T00:00:00.000Z</updated>
      <author>
        <name>kaiomagalhaes</name>
        <uri>https://www.kaiomagalhaes.com</uri>
      </author>
      <content type="html" xml:base="https://www.kaiomagalhaes.com" xml:lang="en">
<![CDATA[ <p>Here at Codelitt, our projects range from web applications to robotics to augmented reality. However, across all of those areas there is one very important and persistent topic: security. When you’re thinking about security, there are a lot of factors to consider. Are we talking about server access? Information traffic? Information storage? The list goes on and on. Security is always top of mind for us, and we just released an enterprise security product, <a href="http://www.bovedahq.com">Boveda</a>, that helps non-technical people send sensitive data to others.</p>
<p>Today, I’m going to talk about a really great service that we use to store sensitive data on a server. Vault is an awesome tool for storing key/value pairs, which we mainly use for our env variables like API keys and passwords. Vault has really good documentation, <a href="https://www.vaultproject.io/intro/getting-started/install.html">found here</a>.
Let’s learn how to set up a Linux server with Docker and Docker Compose, and utilize Vault.</p>
<p>If you are setting up a new server, take a look at our <a href="https://github.com/codelittinc/incubator-resources/blob/master/best_practices/servers.md">server security practices</a> and remember <em>not</em> to set it up as the root user.</p>
<p>I also strongly recommend you use <a href="https://github.com/kaiomagalhaes/incubator-resources/blob/master/best_practices/servers.md#2-factor-authentication">2-factor authentication</a>. It’s a bit tricky, but well worth it.</p>
<p>You will need Docker and Docker Compose installed, so if you don&#39;t have them, take a look at this <a href="https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-compose-on-ubuntu-14-04">great tutorial</a> provided by <a href="https://www.digitalocean.com/">DigitalOcean</a>.</p>
<p>We are going to set everything up as the <code>deploy</code> user.</p>
<p>1 - Before starting the containers, let&#39;s set up the configuration. Put the following content in <code>/home/deploy/vault/vault.config</code>:</p>
<pre><code>listener &quot;tcp&quot; {
  address = &quot;0.0.0.0:9000&quot;
  tls_disable = 1
}

backend &quot;consul&quot; {
  address = &quot;consul:8500&quot;
  path = &quot;vault&quot;
}
</code></pre>
<p>2 - As we are using Docker, the initial setup is very simple. We are going to use a Vault image, whose source you can find <a href="https://github.com/cgswong/docker-vault">here</a>. Put the following content in your <code>docker-compose.yml</code>:</p>
<pre><code>version: &#39;2&#39;
services:
  # Vault
  consul:
    container_name: consul
    image: progrium/consul
    restart: always
    hostname: consul
    ports:
      - 8500:8500
    command: &quot;-server -bootstrap&quot;

  vault:
    container_name: vault
    image: cgswong/vault
    restart: always
    volumes:
      - &#39;/home/deploy/vault/vault.config:/root/vault.config&#39;
    ports:
      - 8200:9000
    environment:
      VAULT_ADDR: &#39;http://0.0.0.0:9000&#39;
    cap_add:
      - IPC_LOCK
    depends_on:
      - consul
    command: &quot;server -config /root/vault.config&quot;
</code></pre>
<p>We are going to use Consul as our backend; you can find more info about it in the <a href="https://www.vaultproject.io/docs/secrets/consul/index.html">Vault Docs</a>.</p>
<p>3 - Now let&#39;s run the containers: <code>docker-compose up -d</code></p>
<p>Congratulations! Now you have Vault running.</p>
<p>Now, to initialize it, you need to follow the steps below:</p>
<ol>
<li>Enter the container with the command:</li>
</ol>
<pre><code>docker exec -it vault bash
</code></pre>
<ol start="2">
<li>Run:</li>
</ol>
<pre><code>vault init
</code></pre>
<p>The response should be something like this:</p>
<pre><code>bash-4.3# vault init
Unseal Key 1: 6d64855dfbcd93654a191aca77e2863a568a0a6444556958837fd526511f7d5b01
Unseal Key 2: 615ec3594ad1446ed84712d6ff570df0ead89c9a38c13a93f9b0a1286e50ed9c02
Unseal Key 3: 5c2a239676337e6ebd36441ba2cae5de949221d49f3fd6571750324ed51a78a403
Unseal Key 4: 0522dd3c3f4c3a5f2e2f15e53c0b3e114ff6d2e1407d0bc4093cd636702d328c04
Unseal Key 5: 38563df303ae005f4b5e43286196d63f31bc6fafe783e700e7dc4550cb67a7b405
Initial Root Token: 33d9d440-202e-6a0c-7cc8-ccc63aa6f66b

Vault initialized with 5 keys and a key threshold of 3. Please
securely distribute the above keys. When the Vault is re-sealed,
restarted, or stopped, you must provide at least 3 of these keys
to unseal it again.

Vault does not store the master key. Without at least 3 keys,
your Vault will remain permanently sealed.
</code></pre>
<p>Now, before anything else, save the unseal keys you got during the last step (without them you can&#39;t store or retrieve any secrets), and make sure to keep them in a safe place that no one else can access and that you won&#39;t lose for any reason. If you need to send them to an organizational partner you can use <a href="https://www.bovedahq.com/">Boveda</a>.</p>
<p>At this point you should have Vault up and running. Let&#39;s check its status. In your browser, open the following URL, replacing <code>yourserver</code> with your server&#39;s address: <code>http://yourserver:8200/v1/sys/seal-status</code></p>
<p>You should see the response:</p>
<pre><code>{
  errors: [
    &quot;Vault is sealed&quot;
  ]
}
</code></pre>
<p>This means it works but is sealed. Let&#39;s unseal it:</p>
<ol>
<li>Enter the container with the command</li>
</ol>
<pre><code>docker exec -it vault bash
</code></pre>
<ol start="2">
<li>Run <code>vault unseal</code></li>
<li>It will ask for a key; pass any of the unseal keys you received earlier. Each run accepts one key, and you need to run <code>vault unseal</code> 3 times with 3 different keys.</li>
</ol>
<p>After the third one you will see the response:</p>
<pre><code>Sealed: false
Key Shares: 5
Key Threshold: 3
Unseal Progress: 0
</code></pre>
<p>This means that now you can store and retrieve keys safely!</p>
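<p>If you&#39;d rather verify the status from a script than from the browser, here is a minimal sketch (the <code>sealed</code>, <code>t</code>, and <code>progress</code> fields come from Vault&#39;s seal-status response; the server address is assumed to be the one used above):</p>

```python
# Summarize Vault's seal-status JSON into a human-readable string.
def seal_summary(status: dict) -> str:
    if not status.get("sealed"):
        return "unsealed"
    # "progress" counts keys provided so far; "t" is the key threshold.
    return f'sealed ({status.get("progress", 0)}/{status.get("t", "?")} keys provided)'

# Against a live server you would fetch the JSON first, e.g.:
#   import json, urllib.request
#   status = json.load(urllib.request.urlopen("http://yourserver:8200/v1/sys/seal-status"))
print(seal_summary({"sealed": True, "t": 3, "n": 5, "progress": 0}))
print(seal_summary({"sealed": False}))
```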
<p>If you already have a secured server you can start saving your keys; otherwise, head to <a href="https://www.codelitt.com/blog/nginx/">this post</a> where we show how to set up NGINX with free SSL and Docker.</p>
<p>A good security practice for this kind of application is to limit access to the Vault port so that only the application that will store/fetch the keys can reach it. To do this, you can either use your cloud provider&#39;s firewall (like AWS security groups) or run:</p>
<p><code>iptables -I PREROUTING 1 -t mangle ! -s your_application_ip -p tcp --dport 8200 -j DROP</code></p>
<p>Since most of the data you plan to store in Vault is probably sensitive, bear in mind that a chain is only as strong as its weakest link: even with Vault, if your server isn&#39;t properly set up, your information isn&#39;t safe.</p>
 ]]>
      </content>
    </entry>
  
    <entry>
      <title>Free SSL with Docker and NGINX</title>
      <id>https://www.kaiomagalhaes.com/blog/A-Free-SSL-with-docker-and-nginx</id>
      <published>2017-05-22T00:00:00.000Z</published>
      <updated>2017-05-22T00:00:00.000Z</updated>
      <author>
        <name>kaiomagalhaes</name>
        <uri>https://www.kaiomagalhaes.com</uri>
      </author>
      <content type="html" xml:base="https://www.kaiomagalhaes.com" xml:lang="en">
<![CDATA[ <p>Here at <a href="https://www.codelitt.com">Codelitt</a> we use NGINX as our proxy server. We used to install it directly on the server and run the applications with Docker and Docker Compose. As we strive for a configuration that isn&#39;t tied to a specific server, we now do it a bit differently: we run NGINX itself inside a container.</p>
<p>In this post I&#39;m going to show how to configure an environment with containerized applications and NGINX as a proxy server.</p>
<p>As we are using Docker containers, we start by writing the docker-compose file. Create a file named <code>docker-compose.yml</code> on your server with the following content:</p>
<pre><code>version: &#39;2.0&#39;
services:
  nginx:
    image: nginx
    container_name: nginx
    restart: always

    ports:
     - &#39;80:80&#39;
     - &#39;443:443&#39;

    volumes:
     - /etc/nginx-docker/:/etc/nginx/
</code></pre>
<p>Some details you may want to know:</p>
<ol>
<li><p>As we have set <code>restart: always</code>, if for some reason the server restarts, the NGINX container will start with it.</p>
</li>
<li><p>We are binding ports <code>80</code> and <code>443</code> to the host, so every connection to these ports will be forwarded to the NGINX container.</p>
</li>
<li><p>As you can see, the configuration files live in a host folder; here we use <code>/etc/nginx-docker</code>, but you can use any folder you want.</p>
</li>
</ol>
<p>Inside this folder you need to add the files that you can find <a href="https://github.com/kaiomagalhaes/nginx-docker-configuration">here</a>.</p>
<p>The steps are:</p>
<pre><code>git clone https://github.com/kaiomagalhaes/nginx-docker-configuration.git
mkdir -p /etc/nginx-docker/
cp -r nginx-docker-configuration/* /etc/nginx-docker/
rm -rf nginx-docker-configuration/
</code></pre>
<p>Now open <code>/etc/nginx-docker/conf.d/default.conf</code>; you will see that it is empty. Fill it with the following content:</p>
<pre><code>upstream my-application {
  least_conn;
  server app:3000 max_fails=3 fail_timeout=20 weight=10;
}

server {
    listen 80;
    server_name YOUR_SERVER_NAME;
    location / {
        proxy_http_version 1.1;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection &#39;upgrade&#39;;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://my-application;
    }
}
</code></pre>
<p>This is a simple NGINX config. If you check line 3, you will see a host with a port; in this case <code>app</code> is the name of your application&#39;s container. You need to ensure that the NGINX container is on the same docker-compose network as the application, and the simplest way is to have everything in the same docker-compose file, like:</p>
<pre><code>version: &#39;2&#39;
services:
  nginx:
    image: nginx
    container_name: nginx
    restart: always

    ports:
     - &#39;80:80&#39;
     - &#39;443:443&#39;

    links:
      - app

    volumes:
     - /etc/nginx-docker/:/etc/nginx/

  db:
    image: postgres
    container_name: myapp-db

  app:
    container_name: myapp-app
    stdin_open: true
    build:
      context: /path/to/myappfolder
      dockerfile: Dockerfile.production

    volumes:
      - /path/to/myappfolder:/share

    ports:
      - &#39;3000:3000&#39;

    depends_on:
      - db

    links:
      - db
</code></pre>
<p>Bear in mind that if your application isn&#39;t running on port <code>3000</code> you need to change the <code>ports</code> section of the app in the docker-compose.yml. You also need to change the build context path to a real one. If you want an app to test with, we have a <a href="https://github.com/codelittinc/rails-5-base-project">Rails 5 base project</a>. Now bring everything up:</p>
<pre><code>docker-compose -f docker-compose.yml up -d
</code></pre>
<p>Cool, your containers are up and running!</p>
<p>If you don&#39;t need to use SSL you can stop here. Goodbye and <a href="http://img14.deviantart.net/c2f0/i/2013/337/7/4/so_long__and_thanks_for_all_the_fish__by_acidbetta-d6ung6t.jpg">thanks for all the fish</a>.</p>
<p>So if you are still here, it&#39;s because you are a smart person who cares about security. As a reward for your hard work, I&#39;m going to teach you how to get SSL for only 12 installments of 0.00 USD (also known as free)!</p>
<p>As you are setting up a custom server, we are going to use <a href="https://letsencrypt.org/getting-started/">Let&#39;s Encrypt</a>.</p>
<p>Before starting you need a domain for the certificate, so if you don&#39;t have one, go ahead and get one. I&#39;ll be waiting.</p>
<p>Got it? Great! Let&#39;s follow the steps:</p>
<p>1 - If you have NGINX running, stop it using <code>docker stop nginx</code>
2 - Run:</p>
<pre><code># download the installer
wget https://dl.eff.org/certbot-auto
# allow it to be an executable
chmod a+x certbot-auto
# install
./certbot-auto
</code></pre>
<p>If you are using Ubuntu 14.04 you may have an issue running certbot. If so, try exporting the following variables and running it again:</p>
<pre><code>export LC_ALL=&quot;en_US.UTF-8&quot;
export LC_CTYPE=&quot;en_US.UTF-8&quot;
</code></pre>
<p>3 - Open port 443:</p>
<pre><code>/sbin/iptables -A INPUT -m state --state NEW -p tcp --dport 443 -j ACCEPT
</code></pre>
<p>4 - Create the NGINX certs folder:</p>
<pre><code>mkdir /etc/nginx-docker/certs
</code></pre>
<p>5 - Run:</p>
<pre><code>./certbot-auto certonly --standalone -d YOUR DOMAIN
</code></pre>
<p>6 - Type your email</p>
<p>7 - Decide whether you want to share your email or not</p>
<p>8 - Copy the certificates to your certs folder. Note that you need to replace <code>YOURSERVER</code> in the following path with your own domain:</p>
<pre><code>cp /etc/letsencrypt/live/YOURSERVER/* /etc/nginx-docker/certs/
</code></pre>
<p>At this point you have your fresh new certificates, but NGINX doesn&#39;t know they exist, so let&#39;s update the configuration.</p>
<p>Run:</p>
<pre><code>vim /etc/nginx-docker/conf.d/default.conf
</code></pre>
<p>and update it with the following content:</p>
<pre><code>upstream my-application {
  least_conn;
  server app:3000 max_fails=3 fail_timeout=20 weight=10;
}

server {
    listen 80;
    server_name YOUR_SERVER_NAME;
    return 301 https://$host$request_uri;
}

server {
    listen 443 default_server;
    server_name YOUR_SERVER_NAME;
    ssl         on;
    ssl_certificate       /etc/nginx/certs/fullchain.pem;
    ssl_certificate_key       /etc/nginx/certs/privkey.pem;

    location / {
        proxy_http_version 1.1;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection &#39;upgrade&#39;;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://my-application;
    }
}
</code></pre>
<p>Start your NGINX server with <code>docker start nginx</code>.</p>
<p>Congratulations! You&#39;ve finished your SSL configuration.</p>
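<p>One caveat: Let&#39;s Encrypt certificates are only valid for 90 days, so you need to renew them periodically. A cron entry like the one below is a sketch of how to automate it. It assumes <code>certbot-auto</code> lives at <code>/root/certbot-auto</code> and that your certbot version supports the <code>--pre-hook</code>/<code>--post-hook</code> flags; since we used the standalone method, NGINX is stopped during renewal, the fresh certificates are copied back into the folder NGINX reads from, and NGINX is started again:</p>
<pre><code># renew every Monday at 3am, then refresh the certs NGINX uses
0 3 * * 1 /root/certbot-auto renew --pre-hook &quot;docker stop nginx&quot; --post-hook &quot;cp /etc/letsencrypt/live/YOURSERVER/* /etc/nginx-docker/certs/ &amp;&amp; docker start nginx&quot;
</code></pre>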
<p>This is just one of multiple ways to get an SSL certificate. It is highly recommended for any kind of web application: not only does it make your site secure, it also lets your users know that you care about their data.</p>
 ]]>
      </content>
    </entry>
  
    <entry>
      <title>Automated deployments with CircleCI and Docker</title>
      <id>https://www.kaiomagalhaes.com/blog/Automated-deployments-with-Circle-and-Docker</id>
      <published>2017-02-16T00:00:00.000Z</published>
      <updated>2017-02-16T00:00:00.000Z</updated>
      <author>
        <name>kaiomagalhaes</name>
        <uri>https://www.kaiomagalhaes.com</uri>
      </author>
      <content type="html" xml:base="https://www.kaiomagalhaes.com" xml:lang="en">
      <![CDATA[ <p>Automate your deployments with CircleCI, Docker and a Linux server.
TL;DR <a href="#soletsgo">go to the tutorial</a></p>
<p>Here at Codelitt, we believe that a fast feedback loop is the best way to ensure that we deliver great products. Since the client has deep market knowledge, we want their validation through each step in the process of bringing a product to life. This feedback loop starts before development begins, and goes on until the product is delivered, with no breaks in between. Working this way allows us to minimize the time required to make further changes, as we rarely need to “go back and change everything”.</p>
<p>In this tutorial we are using AWS, but it will work with any Linux web server. First, a short description of the three main tools we are using:</p>
<h5>CircleCI</h5>
<p>A continuous integration tool that allows us to spin up a deployment system with a couple of clicks, no configuration needed. It offers a safe way to accomplish the four main steps of deployment:</p>
<ol>
<li>Gather dependencies</li>
<li>Run tests</li>
<li>Run lint</li>
<li>Deploy</li>
</ol>
<h5>Docker Compose</h5>
<p>Compose is a tool for defining and running multi-container Docker applications. Currently, we are using Docker-based container deployments to ensure consistency between development and production environments. We&#39;ve been using it for around two years now, and we really love it.</p>
<h5>Server</h5>
<p>This tutorial will work on any Ubuntu 14+ Linux server.</p>
<h3>So let’s get started!</h3>
<p>First, we need to prepare our CircleCI environment variables with our application&#39;s specifics. Below you see each variable name and a description; add them to your CI project with the proper values. (The QA deployment shown later uses the same database and deploy variables with a <code>QA_</code> prefix instead of <code>PROD_</code>.)</p>
<pre><code>DOCKERHUB_COMPANY_NAME
    As we are working with docker we need it in order to prepare the application image path.

DOCKER_EMAIL
    The deploy user dockerhub email

DOCKER_PASS
    The deploy user dockerhub password

DOCKER_USER
    The deploy user dockerhub user

PROD_DATABASE_NAME
    It is the name of your production app&#39;s database

PROD_DATABASE_PASSWORD
    It is the password of your production app&#39;s database

PROD_DATABASE_USER
    It is the user of your production app&#39;s database

PROD_DEPLOY_HOST
    It is the host IP of your production app&#39;s server

PROD_DEPLOY_USER
    It is the user of your production app&#39;s server

PROJECT_NAME
    Make sure to not use any spaces here, we are going to use it for the image deployments
</code></pre>
<p>For continuous deployment, you need to set up the SSH connection between CircleCI and your server. We recommend key-based connections, but you can use user/password as well. If you go with the key-based one, you need to add your private key to your CircleCI project, which you can learn how to do <a href="https://circleci.com/docs/github-security-ssh-keys/">here.</a></p>
<p>This is all the configuration you need to do on the CI; from now on everything is done in your project files. For this tutorial we are using a Ruby on Rails 5 project, which happens to be our base project and... it is open source! You can find it <a href="https://github.com/codelittinc/rails-5-base-project">here.</a></p>
<p>First, let&#39;s organize the <code>circle.yml</code> file:</p>
<pre><code>machine:
  ruby:
    version: &#39;2.3.3&#39;
  services:
    - docker
dependencies:
  pre:
    - gem install bundler
database:
  override:
    - sed -i &quot;s/PROJECT_NAME/$PROJECT_NAME/g&quot; config/database.ci.yml
    - mv config/database.ci.yml config/database.yml
    - bundle exec rake db:create db:schema:load --trace

test:
  override:
    - bundle exec rspec
deployment:
  qa:
    branch: /.*/
    commands:
      - cp Dockerfile.production Dockerfile
      - cp env.example .env
      - sed -i &quot;s/POSTGRES_USER=/POSTGRES_USER=$QA_DATABASE_USER/g&quot; .env
      - sed -i &quot;s/POSTGRES_PASSWORD=/POSTGRES_PASSWORD=$QA_DATABASE_PASSWORD/g&quot; .env
      - sed -i &quot;s/DATABASE_NAME=/DATABASE_NAME=$QA_DATABASE_NAME/g&quot; .env
      - docker build -t codelittinc/rails-base-project:latest .
      - docker login -e $DOCKER_EMAIL -u $DOCKER_USER -p $DOCKER_PASS
      - docker push codelittinc/rails-base-project:latest
      - sed -i &quot;s/NETWORK_NAME/$DOCKERHUB_COMPANY_NAME/g&quot; bin/deploy.sh
      - sed -i &quot;s/DOCKERHUB_COMPANY_NAME/$DOCKERHUB_COMPANY_NAME/g&quot; config/docker-compose.yml.template bin/deploy.sh
      - sed -i &quot;s/PROJECT_NAME/$PROJECT_NAME/g&quot; config/docker-compose.yml.template bin/deploy.sh
      - NETWORK_NAME=QA_NETWORK_NAME DEPLOY_USER=$QA_DEPLOY_USER DEPLOY_HOST=$QA_DEPLOY_HOST VERSION=latest sh bin/deploy.sh

  production:
    tag: /version-.*/
    commands:
      - cp Dockerfile.production Dockerfile
      - cp env.example .env
      - sed -i &quot;s/POSTGRES_USER=/POSTGRES_USER=$PROD_DATABASE_USER/g&quot; .env
      - sed -i &quot;s/POSTGRES_PASSWORD=/POSTGRES_PASSWORD=$PROD_DATABASE_PASSWORD/g&quot; .env
      - sed -i &quot;s/DATABASE_NAME=/DATABASE_NAME=$PROD_DATABASE_NAME/g&quot; .env
      - docker build -t codelittinc/rails-base-project:$CIRCLE_TAG .
      - docker login -e $DOCKER_EMAIL -u $DOCKER_USER -p $DOCKER_PASS
      - docker push codelittinc/rails-base-project:$CIRCLE_TAG
      - sed -i &quot;s/NETWORK_NAME/$DOCKERHUB_COMPANY_NAME/g&quot; bin/deploy.sh
      - sed -i &quot;s/DOCKERHUB_COMPANY_NAME/$DOCKERHUB_COMPANY_NAME/g&quot; config/docker-compose.yml.template bin/deploy.sh
      - sed -i &quot;s/PROJECT_NAME/$PROJECT_NAME/g&quot; config/docker-compose.yml.template bin/deploy.sh
      - NETWORK_NAME=QA_NETWORK_NAME DEPLOY_USER=$PROD_DEPLOY_USER DEPLOY_HOST=$PROD_DEPLOY_HOST VERSION=$CIRCLE_TAG sh bin/deploy.sh
</code></pre>
<p>In this CircleCI file we organize the deployments based on branches. If you push a commit on any branch to the remote repo, it deploys to the QA server; if you push a tag, it deploys to the production server. <em>(Note: normally for our projects we won&#39;t build and deploy on every commit. We&#39;ll designate a specific branch as QA, another as staging, and release tags go to prod.)</em></p>
<p>You need to make sure that your tag name matches the tag validation <code>/version-.*/</code>; if you use a different pattern, just change the regex.</p>
<p>Another important thing to keep in mind is that you need to use your own Docker Hub namespace: wherever you see <code>codelittinc/rails-base-project</code>, replace it with your own.</p>
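<p>A detail worth understanding in the <code>circle.yml</code> above is the sed-based templating: placeholder tokens like <code>PROJECT_NAME</code> are rewritten in place with environment-variable values. Here is a minimal standalone illustration of the same pattern, using a throwaway file in <code>/tmp</code> rather than anything from the real project:</p>

```shell
# The circle.yml steps rewrite placeholder tokens in template files with sed.
# Minimal illustration: a throwaway template containing a PROJECT_NAME token.
PROJECT_NAME=myapp
printf 'container_name: PROJECT_NAME\n' > /tmp/compose.template
sed -i "s/PROJECT_NAME/$PROJECT_NAME/g" /tmp/compose.template
cat /tmp/compose.template
# → container_name: myapp
```

<p>The same substitution runs against multiple files at once in the real pipeline, which is why several paths appear after the sed expression.</p>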
<p>Also, if you check the database section you will see that we use a special <code>database.yml</code>. This is necessary because we deploy with this same file. It has the following content and should be placed at <code>config/database.ci.yml</code>:</p>
<pre><code>default: &amp;default
  adapter: postgresql
  encoding: unicode
  host: localhost
  pool: 5
  user: postgres
  password: postgres

development:
  &lt;&lt;: *default
  database: rails-base-test

test:
  &lt;&lt;: *default
  database: rails-base-test

production:
  &lt;&lt;: *default
  host: PROJECT_NAME-db
  database: &lt;%= ENV[&#39;DATABASE_NAME&#39;] %&gt;
  password: &lt;%= ENV[&#39;POSTGRES_PASSWORD&#39;] %&gt;
  user: &lt;%= ENV[&#39;POSTGRES_USER&#39;] %&gt;
</code></pre>
<p>For both deployments we generate a Docker image, which we build and push to Docker Hub (see? this is where we use those credentials). After pushing the image, we run the <code>deploy.sh</code> file, which has the following content:</p>
<pre><code>#!/usr/bin/env bash

echo &quot;inserting the image version in docker-compose template&quot;
bash -c &#39;sed -i &quot;s/DOCKERHUB_COMPANY_NAME\/PROJECT_NAME/DOCKERHUB_COMPANY_NAME\/PROJECT_NAME:$VERSION/&quot; config/docker-compose.yml.template&#39;

echo &quot;creating projects folder if it doesn&#39;t exist&quot;
ssh $DEPLOY_USER@$DEPLOY_HOST &#39;mkdir -p projects/PROJECT_NAME/config&#39;

echo &quot;copying docker-compose&quot;
scp config/docker-compose.yml.template $DEPLOY_USER@$DEPLOY_HOST:projects/PROJECT_NAME/config/docker-compose.yml.backend

echo &quot;copying env file&quot;
scp .env $DEPLOY_USER@$DEPLOY_HOST:projects/PROJECT_NAME/config/.env

echo &quot;pulling latest version of the code&quot;
ssh $DEPLOY_USER@$DEPLOY_HOST &quot;docker-compose -f projects/PROJECT_NAME/config/docker-compose.yml.backend pull PROJECT_NAME&quot;

echo &quot;creating network if needed&quot;
ssh $DEPLOY_USER@$DEPLOY_HOST &#39;if [ $(docker network ls | grep NETWORK_NAME | wc -l) -gt 0 ]; then echo &quot;network already exists&quot;; else docker network create NETWORK_NAME ; fi&#39;

echo &quot;creating the db container if needed&quot;
ssh $DEPLOY_USER@$DEPLOY_HOST &#39;if [ $(docker ps -a | grep PROJECT_NAME-db | wc -l) -gt 0 ]; then echo &quot;db already exists&quot;; else docker-compose -f projects/PROJECT_NAME/config/docker-compose.yml.backend up -d PROJECT_NAME-db ; fi&#39;

echo &quot;starting the new version&quot;
ssh $DEPLOY_USER@$DEPLOY_HOST &#39;docker-compose -f projects/PROJECT_NAME/config/docker-compose.yml.backend up -d PROJECT_NAME&#39;

echo &quot;create database if it doesn&#39;t exist&quot;
ssh $DEPLOY_USER@$DEPLOY_HOST &#39;docker exec PROJECT_NAME bundle exec rake db:create&#39;

echo &quot;running migrations&quot;
ssh $DEPLOY_USER@$DEPLOY_HOST &#39;docker exec PROJECT_NAME bundle exec rake db:migrate&#39;

echo &quot;removing old and unused images&quot;
ssh $DEPLOY_USER@$DEPLOY_HOST &quot;docker images --filter &#39;dangling=true&#39; --format &#39;{{.ID}}&#39; | xargs docker rmi&quot;

echo &quot;success!&quot;

exit 0
</code></pre>
<p>What we did here is:</p>
<ol>
<li>Prepare the environment variables</li>
<li>Prepare the Docker-Compose file</li>
<li>Deploy the app</li>
<li>Create the database if it doesn&#39;t exist</li>
<li>Run the migrations</li>
</ol>
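<p>The &quot;if needed&quot; steps in <code>deploy.sh</code> rely on a simple guard idiom: probe for the resource, count the matching lines, and only create it when the count is zero. Here is the same pattern in isolation, using a plain file instead of a Docker network or container so you can try it anywhere:</p>

```shell
# Guard idiom from deploy.sh: create a resource only if it doesn't exist yet.
rm -f /tmp/demo-resource            # start from a clean state for the demo
if [ $(ls /tmp | grep demo-resource | wc -l) -gt 0 ]; then
  echo "resource already exists"
else
  touch /tmp/demo-resource
  echo "resource created"
fi
# first run prints: resource created
```

<p>Running the block a second time takes the other branch, which is exactly how the script avoids re-creating the network or the database container on every deploy.</p>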
<p>In order to have it fully automated, we are using a Docker-Compose file template, which should be located at <code>config/docker-compose.yml.template</code> with the following content:</p>
<pre><code>version: &#39;2&#39;
services:
  PROJECT_NAME:
    container_name: PROJECT_NAME
    image: DOCKERHUB_COMPANY_NAME/PROJECT_NAME
    env_file:
      - .env

    ports:
      - &#39;3000:3000&#39;

  PROJECT_NAME-db:
    image: postgres
    container_name: PROJECT_NAME-db
    env_file:
      - .env

networks:
  default:
    external:
      name: DOCKERHUB_COMPANY_NAME
</code></pre>
<p>And now that you have everything set, you can go focus on what matters: your code. This is just one possible way to automate your builds; we like it because it is simple, fast, and you can test/use/adapt it for free.</p>
 ]]>
      </content>
    </entry>
  
    <entry>
      <title>The art of defining products</title>
      <id>https://www.kaiomagalhaes.com/blog/The-Art-Of-Defining-Products</id>
      <published>2016-09-21T00:00:00.000Z</published>
      <updated>2016-09-21T00:00:00.000Z</updated>
      <author>
        <name>kaiomagalhaes</name>
        <uri>https://www.kaiomagalhaes.com</uri>
      </author>
      <content type="html" xml:base="https://www.kaiomagalhaes.com" xml:lang="en">
      <![CDATA[ <p>Software engineering is a cursed profession. Why? Because of the <em>moment</em>:</p>
<p>The moment when you tell someone what you do, and that big smile creeps across their face. They are actually looking for an engineer, they say, because they have a great idea --- no, an incredible idea! One that can make millions! “If we capture only 1% of the market….” Somehow, they find the generosity in their hearts to offer a full 5% of their glorious company for the small task of building it. “Non voting shares, of course.” They truly believe that an idea is all it takes to build a company, but that couldn’t be further from the truth.</p>
<p>Last week a friend asked me for a meeting because he wanted to talk about a big idea that was &quot;insert your big idea here&quot;. He kept saying that his parents and that one friend from college had told him it was great and that he could make a lot of money from it. This is where I start getting excited, because if those people said that, I know the idea is destined for success.</p>
<p><img src="http://imgs.xkcd.com/comics/business_idea.png" alt="XKCD - https://www.xkcd.com/1721/">
<em><a href="https://www.xkcd.com/1721/">Credit XKCD</a></em></p>
<p>I asked him some important questions, which seemed to make him uncomfortable.</p>
<p>1 - Do you need to build EVERYTHING from the final product vision to validate your idea?</p>
<p>This is the most important question for potential new products. We have the tendency to think that we can&#39;t validate an idea without having all of it done; we spend tons of resources only to find that in the end we&#39;ve wasted our time and money. Entrepreneurs, as a rule, have big visions and dream big. This is not a bad thing; in fact, it&#39;s necessary for the long haul. However, you need the ability to narrow your vision to a point where validation is feasible and costs the least amount of both time and money. Another benefit of doing validation on the cheap is that once you meet some potential customers they can give their feedback and inform product development decisions, which is a golden opportunity.</p>
<p>So, the first question in product development is: what is the cheapest and fastest way you can validate this idea in the marketplace?</p>
<p>2 - Do you have the money to build all of it?</p>
<p>Because of their lack of knowledge in the field, people tend to believe that a piece of software is cheap. They don’t realize that besides the many team members needed to create a successful product, they also need to pay for other things like servers, email services, SaaS required for the business, and so on.</p>
<p>A recurring theory is that if you have an idea, someone is going to put their money behind it. People with no product, no customers, and no background in the industry waste years meeting with VCs and Angels trying to raise money for a product that will never exist. We often say at the Codelitt office, “If you think your idea is worth something, go try to sell it and see what you can get for it”. If you study most of the successful seed stage startup pitch decks, you will notice that most of the time the investment is made into a team, the execution, and in-market validation; the idea is a very small part of an investor’s decision.</p>
<p>3 - How do you plan to enter the market?</p>
<p>I once asked a would-be entrepreneur about his marketing strategy for an app he was pitching me. The entrepreneur stared at me incredulously and said, &quot;The App Store&quot;. For some people, it&#39;s hard to believe that an app doesn&#39;t magically appear in front of end users and on the front page of TechCrunch. Software that isn&#39;t seen by those who need it isn&#39;t used. It resides somewhere between the 16th page of Google and the entrepreneur&#39;s broken heart, also known as the internet&#39;s black hole. You must have a plan for distribution. This isn&#39;t Field of Dreams.</p>
<p>As I spoke with my friend and asked him these questions, here are the answers he gave me in order:</p>
<ol>
<li><p>Yes, I have to build the entire vision before launching, because the competitors have very refined products and I am not in a blue ocean.</p>
</li>
<li><p>No, I don&#39;t have the money (or the skills) to build it, but I can get people to invest.</p>
</li>
<li><p>I don&#39;t have any idea how to enter the market yet, but it will be worked out as soon as we have the finished product to show to some potential clients.</p>
</li>
</ol>
<p>This is completely typical of people with ideas and a bit of entrepreneurial spirit who haven&#39;t been exposed to the startup education and culture that we often take for granted.</p>
<p>Now that we know how <strong>not to</strong> approach the problem of defining and building a product let’s talk about how we should do it.</p>
<p>First you need to validate it internally as something that may be worth your time to look into. We do this by fleshing out our idea (actually, several ideas) in the Idea Canvas, a trimmed-down version of the Business Model Canvas (shown later in the article).</p>
<p><img src="https://raw.githubusercontent.com/kaiomagalhaes/blog/master/en/images/image02.png" alt="cartoon"></p>
<p>The Idea Canvas gives you a high level view of your idea, and helps get your thoughts around it on paper. Often when starting, we&#39;ll have several of these for various ideas and approaches for the product.</p>
<p>If the idea still seems valuable to you after filling out the canvas, then it&#39;s time to flesh things out a bit more and eventually validate it in the market.</p>
<p>The best way to flesh out your business model is the well-known lean startup method. It explains how a basic business model is defined and the steps required to act on it. A good way to put it all onto one page is to use the Lean Canvas:</p>
<p><img src="https://raw.githubusercontent.com/kaiomagalhaes/blog/master/en/images/image01.jpg" alt="cartoon"></p>
<p>With this canvas in hand you can define the basic items required for your product&#39;s viability, as well as lay out the main obstacles, the revenue streams, and so on. A good book that explains this in detail is <a href="https://www.amazon.com/Running-Lean-Iterate-Works-OReilly/dp/1449305172">Running Lean</a>.</p>
<p>At this point you&#39;ve filled out your Lean Canvas and defined your customer, revenue streams, and so on. You can take your one-pager and conduct <a href="http://momtestbook.com/">Mom Test</a> interviews, which are aimed at determining whether people (even your mother) think it&#39;s a crap idea. I don&#39;t want to undervalue speaking to potential customers with my earlier comments; this is a critical part of defining success. However, the methodology is drastically different from what my friend did. There is a huge difference between telling your close friends and family about this genius idea you have and asking REAL potential customers about their current behaviour, discussing possible solutions, and getting a commitment. Armed with this knowledge, you need to define your MVP.</p>
<blockquote>
<p>A Minimum Viable Product is that version of a new product which allows a team to collect the maximum amount of validated learning about customers with the least effort - Eric Ries</p>
</blockquote>
<p>With an MVP your aim is to build only what you absolutely need to validate your idea, ensuring it is worth the time and money you are planning to invest. However, don&#39;t fall into the trap of thinking that this is a methodology only for software products. You can make one for any kind of product, e.g.:</p>
<p><strong>A Boat</strong>
<img src="https://raw.githubusercontent.com/kaiomagalhaes/blog/master/en/images/image03.png" alt="cartoon"></p>
<p><strong>Even  donuts</strong>
<img src="https://raw.githubusercontent.com/kaiomagalhaes/blog/master/en/images/image04.jpg" alt="cartoon"></p>
<p>What do these examples have in common? They start minimal and functional. At first, the boat is very minimal, and it seems quite unsafe. Do you think someone would climb into that? I wouldn&#39;t. So that product shouldn&#39;t try to enter the market yet. It needs more improvement, which means more investment.</p>
<p>Can we call the first one an MVP? No, because it isn&#39;t viable enough to test your idea. If you can&#39;t test the product with real customers in the market, then it isn&#39;t an MVP. This is a common misconception: just because something is minimal does not mean you can validate your idea with a weak product. The second version is way better. It is not the full vision, but it looks cleaner, which may catch the user&#39;s attention, and it seems safer, which builds trust (most users don&#39;t know how to swim). If the problem you&#39;re trying to solve is crossing a river, then it is a good candidate. You don&#39;t need to build an entire cruise ship just to have something that gets you to the other side of that river. What if your users say they all get seasick, or, while you are building your cruise ship, someone just builds a bridge?</p>
<p>Now let’s use a real life example.
<img src="https://raw.githubusercontent.com/kaiomagalhaes/blog/master/en/images/image05.jpg" alt="cartoon"></p>
<p><strong>WhatsApp milestones:</strong></p>
<ol>
<li>February 2009 - Can send messages</li>
<li>December 2009 - Can send images</li>
<li>February 2014 - Added phone calls</li>
<li>January 2015 - Web version available for Android</li>
<li>August 2015 - Web version available for iPhone</li>
<li>March 2016 - Can send PDF files</li>
<li>August 2016 - Sending GIFs enters beta. Yeah… GIFs</li>
</ol>
<p>They focused on delivering their main value proposition instead of building the full product vision right from the get-go. WhatsApp is a messenger, so first it needs to send messages. Everything else is an addition to that main functionality. Most of the time your competitors will have your main feature plus some additional ones, so it is not only about getting into the market, but about continually improving to maintain your market share.</p>
<p><strong>Conclusion</strong></p>
<p>When you have an idea or are talking to a potential client, keep in mind that most of the time you should narrow the idea down to its core. Validate it first with a proper MVP, then iterate as you gather feedback and learn. This is how we define products here at Codelitt. Our goal is never just to get projects to build, but rather to build great, worthwhile, and needed products. You can&#39;t get there without these steps. While many people immersed in startup culture may find these steps obvious, newcomers often don&#39;t. Don&#39;t learn the hard way. Follow the steps laid out by the community and you&#39;ll save yourself a lot of heartache.</p>
 ]]>
      </content>
    </entry>
  
</feed>