Beyond Comfort Zones and Cloud Costs: My April Book Roundup

My April Audiobooks: Cloud costs, Overcoming Comfort Aversion, Time Management, and Intermittent Fasting

Generated image using Gemini

Cloud FinOps, 2nd Edition: This book is a goldmine if you’re involved in building or managing apps on the cloud. It’s all about the money side of things – how to keep your cloud costs in check and squeeze the most value out of those resources. I used to work on things like Spend Analytics and Tail Spend Management, so this book totally hit the spot for me. It’s super detailed, gives you real-life examples, and basically teaches you how to be super organized (track, monitor, and course-correct) with your cloud finances. A must-read for all engineering managers and leaders.

The Comfort Crisis: Embrace Discomfort to Reclaim Your Wild, Happy, Healthy Self: This book challenges the idea that constant comfort is the key to happiness. It argues that we actually thrive when we push ourselves outside our comfort zones and embrace new experiences. I liked some of the ideas (the Japanese misogi, for example). However, the writing and narration weren’t the most exciting; a topic like this deserves a more gripping treatment.

Life in the Fasting Lane: How to Make Intermittent Fasting a Lifestyle—and Reap the Benefits of Weight Loss and Better Health: Intermittent fasting has been a hot topic lately, right? This book dives into how you can actually make it part of your everyday life. It’s not just about losing weight, but also about improving your overall health in the long run. It offers practical tips and tricks for fitting fasting into a daily schedule, so you can reap the benefits without feeling constantly restricted. A must-read!

Intermittent Fasting: How to Lose Weight, Burn Fat, and Increase Mental Clarity Without Having to Give Up All Your Favorite Foods: Another book on intermittent fasting. Rather than focusing only on weight loss, this one digs deeper into the other potential benefits of the lifestyle, like clearer thinking and overall better well-being. A short, easy read if you’re curious about trying intermittent fasting and want to understand how it can positively impact your life beyond just the scale.

The On-Time, On-Target Manager: This book is for anyone who is a manager. “The On-Time, On-Target Manager” is basically a guide to becoming a master of time and getting things done. We all know the struggle – deadlines looming, tasks piling up, and sometimes feeling like we’re constantly playing catch-up. This book offers practical strategies to improve time management skills, set clear goals that everyone understands, and keep the team on track. It’s like having a personal coach showing you how to ditch the last-minute scramble and become a productivity pro!

Happy Learning!

March Must-Reads: Health, Tech & Transformation

Here’s a peek at the books that kept me company in March:

#1: Living a Long, Healthy Life: A Review of “Outlive” by Dr. Peter Attia: Outlive wasn’t just informative, it was a game-changer. This book is getting a permanent spot on my re-read shelf. Dr. Attia challenges the traditional focus on lifespan, urging us to prioritize healthspan – living a long life filled with vitality.

Key Takeaways for Thriving:

  • Early Intervention is Key: Forget waiting for problems to arise. Attia champions a proactive approach to health, emphasizing preventative measures.
  • Conquering the “Four Horsemen”: The book identifies heart disease, cancer, neurodegenerative diseases, and metabolic disorders as major threats. It equips you with strategies to minimize your risk.
  • Exercise is King (and Queen): Science confirms exercise as the ultimate longevity booster. Attia recommends a multi-pronged approach, incorporating aerobic, resistance, mobility, and balance exercises.
  • Food for Nourishment, Not Fads: Attia ditches rigid diets in favor of a balanced approach focused on whole foods. He emphasizes understanding your individual needs – are you undernourished or overeating?
  • Sleep: The Unsung Hero: “Outlive” highlights the importance of quality sleep for overall health. The book provides tips to optimize your sleep hygiene.
  • A Holistic Approach: Attia acknowledges the mind-body connection. The book explores the significant role of stress management and emotional well-being in living a long and healthy life.

I found a very detailed chapter-wise summary here. Worth Bookmarking!

#2. Beyond Healthspan: Exploring Generative AI with Harvard Business Review

This month’s tech pick was Harvard Business Review’s “Generative AI: The Insights You Need.” It delves into the exciting world of Generative AI, a type of artificial intelligence that can create new content, from text and images to even music!

Why It Matters for Businesses:

  • A Transformative Powerhouse: Generative AI has the potential to revolutionize how businesses operate. The book explores how companies can leverage this technology to gain a competitive edge.
  • Making Informed Decisions: While the potential is vast, there’s still some uncertainty surrounding this technology. The book offers guidance on navigating this new frontier and selecting the most suitable generative AI projects for your business.
  • Real-World Examples and Considerations: The book showcases real companies already utilizing generative AI. It also addresses challenges like data readiness and ethical considerations that come with this powerful technology.

#3. Leading the Change: A Look at “Transformed: Moving to the Product Operating Model”

Looking to transform the product organization? Marty Cagan’s “Transformed” offers a roadmap for success. This book guides companies through the process of shifting to a product operating model, a crucial step for staying competitive in today’s fast-paced tech world.

Building a Product Powerhouse:

  • Shifting Gears: Move away from IT-centric models and embrace a product-centric approach to drive innovation.
  • Level Up Your Team: Invest in building strong product management, design, and engineering teams. Foster a product-focused culture with clear guiding principles.
  • Empowerment is Key: Forget micromanagement! Effective leadership is about coaching, providing resources, and setting clear direction for your product teams.
  • Adaptability Wins: Don’t get bogged down in rigid processes. Embrace core principles that allow flexibility and responsiveness to change.
  • Charting Your Course: Learn how to assess your current state, develop a strategic transformation plan, and overcome potential roadblocks.

A Reality Check: Is This a Silver Bullet?

No doubt this book provides valuable insights. However, it is not the “Inspired” of product management transformations.

Here’s the thing: real-world change is messy. The book’s short case studies paint a potentially unrealistic picture of a smooth transformation. While the book provides a roadmap, a deeper dive into Trainline’s transformation over several years – the nitty-gritty of mistakes made, challenges faced, course corrections, and lessons learned – would have been far more beneficial.

Happy Learning!

Code Whisperer: How AI Reads Your Code

Have you ever wondered how an LLM-powered AI tool like AWS CodeWhisperer automatically finds duplicate code (even when names and signatures are not very similar) and suggests refactoring recommendations?

Enter vector embeddings, a powerful technique that’s transforming how Large Language Models (LLMs) interact with code. Imagine you’re trying to teach a computer different languages. Computers only understand numbers, not our human language. Vector embeddings bridge this gap by converting code (text) into a special numerical code the computer can understand. But it’s not just random numbers. This code captures the meaning and relationships between different parts of your code.

Imagine you have an e-commerce app. When someone searches for “running shoes,” the app uses vector embeddings to understand the intent. Words like “sneakers” or “trainers” might be considered close neighbors, allowing the app to show relevant products even if the exact keyword isn’t used.

So, vector embeddings are like a secret translator between our world of words and the computer’s world of numbers, allowing them to understand the meaning behind the words.

This way, even though the computer isn’t directly understanding the words, it can still learn how words are related to each other based on their numerical codes. This lets computers do cool things like:

  • Recommend products you might like based on what other people with similar tastes bought.
  • Find similar documents or articles even if they don’t use the exact same words.
  • Understand the sentiment of a text, whether it’s positive, negative, or neutral.

Photo by Mika Baumeister on Unsplash

Now, applying this concept to code: each method or code snippet is mapped to a vector in a high-dimensional space, where similar methods reside closer together. This lets you use mathematical operations to determine the semantic similarity between methods.

Using Vector embeddings to identify Duplicate Methods for Adding Products

Let’s say there are five seemingly duplicate methods that add products to a cart:

public void addToCart(String productId) {
  // Implementation
}

public void addItemToCart(String productId, int quantity) {
  // Similar implementation (might add quantity check)
}

public ShoppingCart addProduct(String itemId) throws ProductNotFoundException {
  // Implementation might return a ShoppingCart object
}

public void putInCart(String productId, int quantity, String color) {
  // Might handle color selection as well
}

public Boolean addCartItem(String productId, int quantity) {
  // Implementation might return a boolean indicating success
}

These methods might have slight variations in:

  • Parameter names (productId vs. itemId)
  • Number of parameters (productId vs. productId and quantity)
  • Return type (void vs. ShoppingCart or Boolean)
  • Additional functionalities (handling quantity, color)

By generating vector embeddings for these methods, the system can identify methods that are semantically similar, even with these variations. Methods with similar functionalities will have vectors close together in this space, while dissimilar methods will be farther apart.

How does the similarity calculation work? Tools employ a distance metric like cosine similarity to measure the closeness of vectors. Methods with highly similar vectors are likely duplicates or functionally equivalent.

Here’s a simplified illustration (assuming the scores represent cosine similarity, where higher means more similar):

  • addToCart vs. addItemToCart: 0.98 (High similarity, likely duplicates)
  • addToCart vs. addCartItem (with comments): 0.92 (High similarity, likely duplicates)
  • addToCart vs. addProduct: 0.85 (Moderate similarity, potential variation)

By analyzing these scores, the system can flag methods like addToCart and addItemToCart as potential duplicates for further code review and potential refactoring.
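To make this concrete, here is a minimal, self-contained Java sketch of the core calculation. The four-dimensional vectors are toy values chosen for illustration; real embeddings come from a trained model and typically have hundreds of dimensions.

public class EmbeddingSimilarity {

  // Cosine similarity: dot(a, b) / (|a| * |b|); values near 1.0 mean the
  // vectors point in almost the same direction, i.e., similar meaning.
  static double cosineSimilarity(double[] a, double[] b) {
    double dot = 0, normA = 0, normB = 0;
    for (int i = 0; i < a.length; i++) {
      dot += a[i] * b[i];
      normA += a[i] * a[i];
      normB += b[i] * b[i];
    }
    return dot / (Math.sqrt(normA) * Math.sqrt(normB));
  }

  public static void main(String[] args) {
    // Toy embeddings for three methods
    double[] addToCart = {0.91, 0.12, 0.44, 0.08};
    double[] addItemToCart = {0.89, 0.15, 0.47, 0.10};
    double[] formatInvoice = {0.05, 0.88, 0.02, 0.61};

    System.out.printf("addToCart vs. addItemToCart: %.2f%n",
        cosineSimilarity(addToCart, addItemToCart)); // high score, likely duplicates
    System.out.printf("addToCart vs. formatInvoice: %.2f%n",
        cosineSimilarity(addToCart, formatInvoice)); // low score, unrelated
  }
}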

Beyond Duplicates: Other Use Cases for Vector Embeddings

But vector embeddings aren’t just for finding duplicates! They can also be used for:

  • Code recommendations: suggesting relevant functions or classes based on what you’re working on in your code editor.
  • Automatic documentation: analyze code relationships captured in the embeddings to generate documentation comments automatically.
  • Program repair: By finding similar code patterns, AI can suggest potential fixes for bugs.

Large Language Models (LLMs) like the one behind CodeWhisperer take vector embeddings to the next level. They use these “code fingerprints” to understand and manipulate your code in amazing ways:

  • Understanding unfamiliar code: An LLM can encounter a new function and, based on its embedding similarity to a known function, explain what it likely does.
  • Context-aware analysis: By analyzing loops and variable usage within their embeddings, LLMs can explain the purpose of code sections.
  • Error detection: LLMs can identify potential errors by spotting mismatches between a function’s expected input (based on its embedding) and the actual input being used.

LLMs can also use embeddings to find and address code duplication beyond simple code blocks:

  • Identifying similar functionalities across classes: If methods in different classes have similar embeddings, the LLM might suggest consolidating or reusing that code.
  • Refactoring recommendations: Based on variable and function embeddings, LLMs can suggest improvements like renaming variables for clarity, extracting reusable functions, or restructuring class hierarchies.

The Future is Embeddings

Vector embeddings are just the beginning: a powerful tool that LLMs leverage to understand and manipulate code. As these models continue to evolve, we can expect even more advanced code analysis, debugging assistance, and refactoring recommendations – all powered by the magic of embeddings!

This will play a crucial role in making code development faster, smarter, and more collaborative.

Happy Learning!

Pandora’s Codebox: Unleashing the Power (and Perils) of Generative AI for Software Development

As we speak, the software development landscape is undergoing a significant transformation driven by generative AI.

This change will undoubtedly impact not only business models but also the very foundation of how software is built. This is merely the first step in a larger journey. As generative AI technology matures and becomes seamlessly integrated across the entire software development lifecycle (SDLC), we can anticipate significant advancements in both the speed and quality of the development process.

Imagine a world where product managers craft user stories and mockups faster than you can say “Scrum meeting.” Picture developers with AI sidekicks whispering code suggestions in their ears, like a caffeinated autocomplete on steroids. This isn’t science fiction, this is the future brewing with generative AI, and it’s poised to turn software development into a superhero origin story for applications.

But remember, with great power comes great responsibility. Just like any superhero movie, there are potential villains lurking in the shadows.

The future is uncertain, both exciting and daunting in equal measure.

On the one hand, the potential is limitless. This technology can solve problems we haven’t even imagined yet, except maybe that infamous Bellandur and Silkboard traffic (but you know what, we can dream!).

However, we must tread carefully. Over-reliance and neglecting to consider the future of younger developers are potential pitfalls.

We might even need a dedicated team to navigate the unforeseen risks that come with such immense power.

Charles Darwin famously observed: “It is not the strongest of the species that survives, nor the most intelligent, but the one most responsive to change.” In the ever-changing world of technology, this principle holds true more than ever. Large Language Models (LLMs) represent a significant shift, offering remarkable capabilities, from generating human-quality text to translating languages and writing different kinds of creative content.

Photo by Bruno Martins on Unsplash

While some may resist embracing LLMs, staying ahead of the curve necessitates proactive exploration of this technology. Inaction puts us at risk of being surpassed by others who can leverage its capabilities to their advantage.

Here’s why developers, testers, product managers, designers, and engineering managers should explore the possibilities of LLMs:

  • The Future is Here: LLMs do not seem to be a passing fad; they are here to stay. Ignoring their potential could put all of us at a disadvantage as others leverage these tools to innovate and gain an edge.
  • Embrace the Wave: Change can be daunting, but it also presents opportunities. LLMs can automate repetitive tasks, enhance creativity, and streamline workflows, freeing up time and resources for higher-level thinking.
  • Responsibility and Awareness: While LLMs offer immense potential, they also come with ethical considerations. It’s crucial to approach them with awareness, ensuring they are used ethically and responsibly.

In essence, the call to action is clear: Adapt. Embrace. Explore the possibilities of LLMs, navigate the future with a blend of excitement, awareness, and responsibility, and embrace the wave of change that is shaping the technological landscape. By embracing this evolving landscape, we can unlock a future of more efficient, effective, and potentially even superheroic software development.

Happy Learning!

The LLM Revolution: An Engineering Manager’s Guide to Boosting Efficiency

As engineering leaders, we are constantly seeking ways to improve our teams’ productivity and output. Recently, my teams have been exploring the potential of Large Language Model (LLM) based AI assistants like AWS CodeWhisperer and AWS CodeGuru.

I’ve been sharing my thoughts on this topic in a series of recent posts.

While LLMs hold promise to streamline workflows, it’s crucial to assess their actual impact. In this post, we will look at how to measure the effectiveness of LLMs in software development:

Before Adoption: Consider These Factors

  • Baseline your team’s performance: Establish clear baselines for key metrics like velocity, feature completion rate, cycle time, lead time, and defect escape rate. These metrics will help you compare performance after LLM adoption (a minimal sketch for computing two of them appears after this list).
  • Metrics definition: Clearly define metrics to track, like code completion speed, bug reduction rate, or development velocity. Don’t solely rely on lines of code written. Consider metrics like code quality, reduced context switching, and developer happiness.
  • Project complexity: LLMs might not be equally effective across all projects. Assess the project’s suitability for the specific LLM capabilities. For example, if your ecosystem relies on a niche set of technologies, the tool may not add a lot of value.
  • Prioritize: Use your metrics to identify low-hanging fruit in your development process. You need smaller wins to get started; gradually expand to areas that will give more mileage.
  • Team dynamics: Are your engineers receptive to new technologies? Consider conducting a pilot program with a small group (a control group) first.
  • Gradual Adoption: Don’t overwhelm your team. Start with a small group and gather feedback before scaling up.

Measurable Business Benefits:

  • Reduced development time (faster code completion, bug fixing, and refactoring)
  • Improved code quality (fewer bugs, better code adherence to best practices)
  • Increased developer satisfaction (reduced repetitive tasks, freeing them for higher-level problem-solving)
  • Knowledge transfer and learning (LLMs can suggest approaches or patterns unfamiliar to new developers)

Photo by Andreas Klassen on Unsplash

Striking the Right Balance

While LLMs can offer valuable assistance, it’s vital to maintain a human-centric approach:

  • LLMs are not replacements for developers: They should enhance, not replace, critical thinking, problem-solving, and domain expertise. Do not seed this idea in your leadership team. More on this topic here.
  • Upskilling and Training: Provide training on how to effectively utilize LLMs and interpret their suggestions to avoid over-reliance and potential bias.
  • Code quality and security remain paramount: Developers must carefully review and understand LLM-generated code before integrating it.
  • Continuous learning and adaptation: Monitor adoption, measure impact, and adapt your approach based on ongoing evaluations.

It’s tempting to assume that intuitive tools like LLMs require no training. However, my experience has shown that successful adoption hinges on education. Equipping teams with clear guidelines, practical use cases, and ongoing support is crucial to bridge the gap between potential and reality. Otherwise, the very inconsistencies we hoped to address might persist in new forms.

Examples of Measuring LLM Impact:

Task | Traditional Approach (without LLM) | LLM-assisted Approach | Metrics to Track
Reviewing Requirements | Manual analysis, potentially missing hidden aspects | Leverage LLM to summarize and identify acceptance cases, potential issues, and edge cases | Time spent on requirements analysis, number of revisions
Architecture & Design | Relies on individual experience and research | LLM can suggest relevant patterns and architectural styles based on project context | Number of design iterations, time spent on research, spikes, POCs
Understanding Existing Code | Time-consuming manual code review | LLM can summarize code functionality and identify potential issues | Time taken to understand existing code, number of questions raised
Coding | Manual writing, potentially involving boilerplate code | LLM can suggest code snippets based on comments and context | Lines of code written per unit time, reduction in boilerplate code
Code Reviews | Primarily focused on identifying bugs and logic flaws | LLM can highlight potential security vulnerabilities and code style inconsistencies | Code review efficiency, number of identified issues, number of valuable feedback items, code improvements incorporated
Refactoring | Manual identification of areas for improvement | LLM can suggest alternative code structures and potential performance optimizations | Time spent on refactoring, code readability scores
Optimizing Existing Code | Manual profiling and analysis | LLM can suggest potential areas for optimization based on code analysis | Code performance improvement (e.g., execution time)
Unit Tests & Integration Tests | Manual test case creation and execution | LLM can generate test cases based on code functionality | Number of test cases automatically generated, test coverage percentage
Identify Issues with Code | Primarily relies on manual testing and code reviews | LLM can assist in static analysis, identifying potential security risks and code smells | Number of issues identified by LLM, reduction in runtime errors
Debugging | Manual debugging process can be time-consuming | LLM can suggest potential causes based on error messages and context | Debugging time, number of iterations required to resolve an issue
Log Analysis | Manual analysis of log files to identify errors and performance bottlenecks | LLM can summarize logs and identify key insights | Time spent on log analysis, accuracy of identified issues
These are some examples of what one can track during development. I am sure a similar list can be created for product management, QA, and DevOps. Keep in mind that local optimization, looking at improvements in one particular area alone, will only yield smaller gains.

Remember, measuring the impact of LLMs is an ongoing process. Experiment, gather data, and adapt your approach to maximize the benefits for your team and your business.

Happy Learning!

Human + AI: The way forward for Developers & Testers in the age of LLMs

Having explored the potential of LLMs to boost efficiency for developers and testers in our previous posts, let us turn our attention to a critical consideration: the cautious introduction of these tools into coding and testing practices. Over-reliance on LLMs can introduce unintended consequences that hinder professional growth and create new challenges.

Missed the earlier posts? No worries, here’s a quick recap!

The Pitfalls of Overdependence:

Contextual Understanding: LLMs excel at pattern recognition and text generation, not deep code comprehension. They can suggest code snippets or test cases, but their grasp of the underlying logic and purpose might be limited. This can lead to the introduction of errors or inefficiencies, as the LLM might not account for specific edge cases or nuances of the project. You can, of course, train a model on your own codebase; however, in my opinion, that is not for everyone.

Photo by VD Photography on Unsplash

Independent Learning: Overdependence on LLMs for solutions can hinder the development of critical problem-solving and analytical skills. These are essential for becoming a well-rounded developer or tester, able to tackle diverse challenges independently. Constant reliance on LLMs can create a crutch, preventing individuals from developing their own critical thinking abilities and fostering a sense of intellectual stagnation.

Bias and Security Concerns: LLMs are trained on massive amounts of data, which can contain inherent biases. These biases can be reflected in their outputs, potentially leading to discriminatory or unfair testing practices, or the generation of biased code. Additionally, relying solely on LLMs for solutions can create a sense of insecurity in developers and testers, hindering their confidence in their own abilities. Overdependence can also raise security concerns, as malicious actors could potentially exploit vulnerabilities in LLMs to introduce biases or manipulate their outputs for their own gain.

Finding the Synergy: Human + LLM Collaboration:

So, how can you leverage LLMs effectively without falling into the trap of overreliance?

Focus on Guidance, Not Automation: Utilize LLMs for inspiration, alternative perspectives, or generating test cases to cover different scenarios. However, always critically evaluate their suggestions and apply your own judgment. Don’t treat LLMs as a replacement for your own expertise, but rather as a collaborative tool to spark new ideas and broaden your perspective.

Invest in Building a Strong Foundation: Continuously strengthen your core coding and testing knowledge through online courses, tutorials, or books. This equips you to understand the “why” behind solutions, not just rely on the “what” provided by LLMs. A strong foundation in coding principles and testing methodologies allows you to effectively evaluate LLM suggestions and integrate them into your workflow seamlessly.

Practice Independent Problem-Solving: Regularly challenge yourself with coding exercises or test case creation without LLM assistance. This hones your critical thinking and problem-solving skills, crucial for independent work and fostering a deeper understanding of the tasks at hand. Regularly engaging in independent problem-solving builds resilience and confidence, allowing you to approach challenges with a more nuanced understanding.

AI: Friend or Foe?

While LLMs and other AI advancements might automate routine tasks, they are unlikely to completely replace human developers and testers. Their true value lies in augmenting human capabilities, not replacing them.

While LLMs excel at automating repetitive tasks like generating boilerplate code, which can constitute a significant portion (estimated around 60-70%) of development effort, it’s crucial to remember that the core business logic, accounting for the remaining 30-40%, still requires human expertise. By mastering the strategic use of LLMs, we can significantly enhance developer productivity without compromising on the crucial aspects that benefit from human judgment and creativity.

By leveraging LLMs strategically, developers and testers can free up their time for higher-level tasks requiring creativity, critical thinking, and human judgment.

Considerations:

Explainability: As LLMs become more complex, ensuring their outputs are explainable and transparent is critical. This allows developers and testers to understand the reasoning behind suggestions and build trust in the LLM’s recommendations.

I see this all the time with developers today. They are often unsure of the reasons behind recommendations. In fact, some team members call me to say they have found a bug in the tool :).

Explainable LLMs empower users to make informed decisions based on a clear understanding of the rationale behind the suggestions.

Ethical Considerations: As with any AI technology, ethical considerations regarding bias and responsible development must be addressed. Developers and organizations using LLMs must be vigilant in identifying and mitigating potential biases in the training data and development process to ensure fair and responsible use of the technology. I like AWS CodeWhisperer in this respect.

Examples:

  • A developer utilizes an LLM to suggest unit tests for a new function. They review the suggestions and identify potential edge cases not covered by the LLM. They then modify the test cases to ensure thorough coverage.
  • A tester utilizes an LLM to brainstorm potential test scenarios for a user interface. They then use their own experience and knowledge to prioritize the test cases based on risk and user impact, and refine them to ensure they effectively evaluate the functionality and usability of the interface.
  • A team leverages an LLM to generate code comments to improve code readability. However, they verify the accuracy and clarity of the comments before integrating them into the codebase, ensuring they accurately reflect the code’s functionality and intent.
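To illustrate the first example, here is a minimal JUnit 5 sketch. DiscountCalculator.apply is a hypothetical method under test; the first test stands in for an LLM suggestion, and the second is the edge case a reviewing developer adds.

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class DiscountCalculatorTest {

  // Happy-path test, as an LLM might suggest it
  @Test
  void appliesTenPercentDiscountForGoldUsers() {
    assertEquals(90.0, DiscountCalculator.apply(100.0, "Gold"), 0.001);
  }

  // Edge case added by the developer during review: a zero-priced product
  @Test
  void zeroPriceYieldsZeroTotal() {
    assertEquals(0.0, DiscountCalculator.apply(0.0, "Gold"), 0.001);
  }
}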

By employing LLMs strategically while focusing on continuous learning and fostering a culture of critical thinking, developers and testers can unlock the true potential of this technology without hindering their own professional growth.

Remember, LLMs are powerful tools, but they are most effective when used in conjunction with human expertise and judgment.

Happy Learning!

Beyond Human Code Reviews: Empowering developers with the power of automated code reviews

We have been talking about rapid release cycles in the last couple of blog posts and how we can enable teams to be efficient in such high-pressure environments. Traditional code reviews, while essential for maintaining code quality, can be an absolute bottleneck. They are time-consuming, and no matter how many guidelines you set, human reviewers are still subjective and their consistency can vary.

This inconsistency can lead to missed errors and vulnerabilities, jeopardizing the security and stability of your applications.

Photo by AltumCode on Unsplash

We were looking at multiple tools for automating this process. A major challenge is the code itself: we don’t want to share it with other providers for their training.

Enter AWS CodeGuru

AWS CodeGuru offers a solution by automating code reviews and security scans. It leverages machine learning (ML) models trained on vast datasets to identify potential issues in the code. This not only reduces the burden on fellow reviewers but also improves the consistency and efficiency of the review process.

Data Privacy and Security:

As discussed before, one of the major concerns with automated code reviews is data privacy. However, with CodeGuru, your code remains within your AWS account. The service analyzes the code locally and sends only the aggregated findings, ensuring your code’s confidentiality.

Java Code Examples:

Refactoring:

Before:

Java

public class Order {
  private String orderId;
  private String customerName;
  private List<Item> items;

  public Order(String orderId, String customerName, List<Item> items) {
    this.orderId = orderId;
    this.customerName = customerName;
    this.items = items;
  }

  public double getTotalPrice() {
    double totalPrice = 0;
    for (Item item : items) {
      totalPrice += item.getPrice();
    }
    return totalPrice;
  }
}

After (Improved Readability):

Java

public class Order {
  private final String orderId;
  private final String customerName;
  private final List<Item> items;

  public Order(String orderId, String customerName, List<Item> items) {
    this.orderId = orderId;
    this.customerName = customerName;
    this.items = Collections.unmodifiableList(items);
  }

  public double calculateTotalPrice() {
    return items.stream().mapToDouble(Item::getPrice).sum();
  }
}

Security Vulnerability Identification:

Before (SQL Injection):

Java

String userName = request.getParameter("username");
String sql = "SELECT * FROM users WHERE username = '" + userName + "'";

Vulnerability: This code is susceptible to SQL injection attacks as user input is directly added to the SQL query without proper sanitization.
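As a sketch of the standard remediation (not CodeGuru’s literal output), a parameterized query binds user input as data rather than concatenating it into SQL; `connection` is assumed to be an open java.sql.Connection:

Java

String userName = request.getParameter("username");
// User input is bound as a parameter, so it can never alter the query structure
String sql = "SELECT * FROM users WHERE username = ?";
try (PreparedStatement stmt = connection.prepareStatement(sql)) {
  stmt.setString(1, userName);
  try (ResultSet rs = stmt.executeQuery()) {
    // process the result set
  }
}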

Before (Cross-Site Scripting (XSS)):

Java

String comment = request.getParameter("comment");
out.println("<p>" + comment + "</p>");

Vulnerability: This code is vulnerable to XSS attacks as user input is directly displayed on the web page without proper escaping, allowing attackers to inject malicious scripts.
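Again as a sketch of a common remediation (assuming the OWASP Java Encoder library is on the classpath), escaping user input before rendering neutralizes injected markup:

Java

import org.owasp.encoder.Encode; // OWASP Java Encoder library

String comment = request.getParameter("comment");
// Escaped input renders as plain text, so injected <script> tags are harmless
out.println("<p>" + Encode.forHtml(comment) + "</p>");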

Benefits for Engineering Teams:

  • Improved Code Quality: CodeGuru identifies potential issues like code smells, potential bugs, and security vulnerabilities, leading to cleaner and more maintainable code.
  • Faster Release Cycles: By automating repetitive tasks and improving review consistency, CodeGuru can significantly speed up the development process.
  • Reduced Costs: Early identification and mitigation of security vulnerabilities can prevent costly breaches and downtime.

Downsides:

  • Cost: While CodeGuru offers a free tier, it is not completely free.
  • Limitations: CodeGuru might not be able to detect all potential issues, and human reviews are still crucial for complex scenarios.

Alternatives:

  • SonarQube: An open-source platform offering similar functionalities to CodeGuru. We still use SonarQube for the other functionalities it offers.
  • Fortify: A commercial static application security testing (SAST) tool with advanced features.

I am sure there are many more static code analysis tools and code linters available in the market. However, CodeGuru’s integration with AWS services and its focus on security vulnerabilities make it a compelling option.

Summary:

AWS CodeGuru presents a valuable tool for enhancing code reviews and security in the age of rapid development. While not a replacement for human expertise, it can significantly improve the development process by identifying potential issues early, leading to faster releases and more secure applications.

Happy Learning!

LLMs: Your New Copilot on the Web Dev Adventure

In our previous post, we looked at how Java Developers can leverage LLMs to enhance their coding and improve their efficiency.

In this post, we will look at the possibility of doing the same for someone doing web application development using React, Node.js, and other libraries in that ecosystem. We will leverage prompt engineering to write new code, debug existing code, refactor for efficiency, and improve the overall quality of the work.

For context, consider yourself a React developer focused on building a seamless checkout experience for an e-commerce web application like everShop, an open-source, GPL-3-licensed eCommerce application (it seems to be a decent, well-written application). We’ll assume the development environment uses libraries like React, Node.js, React DOM, React Router, React Query, and Material UI, and we will use existing code from the everShop repository for prompt engineering.

Imagine you’re building the checkout functionality for this e-commerce application. Assuming your current work involves:

1. Code Generation:

  • Prompt: “Based on the existing placeOrder.js code, generate a function that takes the following arguments: cartItems, shippingAddress, billingAddress, and returns a promise that resolves to the created order object.”
  • Output: Generated code in JavaScript
const placeOrder = async (cartItems, shippingAddress, billingAddress) => {
  try {
    const response = await fetch('/api/orders', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ cartItems, shippingAddress, billingAddress }),
    });

    if (!response.ok) {
      throw new Error('Failed to place order');
    }

    const data = await response.json();
    return data.order;
  } catch (error) {
    throw error;
  }
};
  • Prompt: Write a function to validate a user’s shipping address.

“Generate a JavaScript function that takes a shipping address object as input and returns true if the address is valid, false otherwise. The function should validate the following:

  • Presence of all required fields (name, address line 1, city, state, zip code)
  • Valid format for each field (e.g., zip code must be 5 digits)”
  • Output: The LLM will generate a code snippet that you can use as a starting point for your function.
  • Prompt: Generate a React functional component that acts as a product card. This component should accept a product object as a prop and display the product’s name, price, and a button to add it to the cart. Utilize Material UI components for styling purposes.
  • Output: This prompt provides the LLM with clear context about the component’s intended purpose, including the props it should handle and the desired styling approach. The LLM will then generate code similar to:

import React from 'react';
import Card from '@mui/material/Card';
import CardContent from '@mui/material/CardContent';
import Typography from '@mui/material/Typography';
import Button from '@mui/material/Button';

// addToCart is assumed to be passed in alongside the product prop
const ProductCard = ({ product, addToCart }) => {
  return (
    <Card>
      <CardContent>
        <Typography variant="h6">{product.name}</Typography>
        <Typography>Price: ${product.price}</Typography>
        <Button variant="contained" onClick={() => addToCart(product)}>
          Add to Cart
        </Button>
      </CardContent>
    </Card>
  );
};

export default ProductCard;

2. Code Understanding:

  • Prompt: “Explain the purpose of the addItemToCart.js file and the functions it contains.”
  • Output: “The addItemToCart.js file contains functions for adding items to the shopping cart. The primary function is likely addItemToCart, which takes a product ID and quantity as arguments and performs the necessary actions to update the cart state and potentially interact with a backend API.”

3. Debugging:

  • Prompt: “The placeOrder.js function is throwing an error ‘Failed to place order’. Analyze the code and suggest potential causes for this error.”
  • Output: “The error message suggests an issue with the API call to ‘/api/orders’. Possible causes could be a network issue, an error in the request body, or a server-side problem. Inspect the response object and consider logging additional details during the API call to narrow down the issue.”
  • Prompt: “Analyze the code in addItemToCart.js and identify any potential logical errors or edge cases that could lead to unexpected behavior.”
  • Output: The LLM would then analyze the code and highlight areas where there might be bugs, allowing you to focus your debugging efforts.

4. Refactoring:

  • Prompt: “Refactor the placeOrder.js function to improve readability and maintainability.”
  • Output: Refactored code in JavaScript
const placeOrder = async (cartItems, shippingAddress, billingAddress) => {
  const orderData = { cartItems, shippingAddress, billingAddress };

  try {
    const response = await fetch('/api/orders', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(orderData),
    });

    if (!response.ok) {
      throw new Error('Failed to place order');
    }

    return await response.json();
  } catch (error) {
    throw error;
  }
};

5. Code Improvement:

  • Prompt: “Analyze the addItemToCart.js function and suggest improvements for performance or error handling.”
  • Output: “Consider adding memoization to the addItemToCart function to avoid redundant API calls for the same product. Additionally, implement more specific error handling based on the type of error received from the API.”
  • Prompt: “Rewrite the code in placeOrder.js to improve readability and maintainability while preserving the original functionality.”
  • Output: The LLM would suggest alternative ways to write the code that are easier to understand and maintain.

6. Break down complex problems:

When faced with a complex task, like refactoring a large codebase, you can break it down into smaller, more manageable steps using prompts. For example, you could prompt an LLM to:

“Analyze the code in placeOrder.js and identify potential areas for improvement, focusing on code readability and maintainability.”

The LLM would then analyze the code and provide suggestions for improvement, such as refactoring functions, improving variable naming, or adding comments.

Prompt engineering can be a valuable tool for web developers, helping you to:

  • Be more productive by automating repetitive tasks and generating code snippets.
  • Be more creative by exploring new ideas and approaches to problems.
  • Be more efficient by identifying and fixing bugs more quickly.

As you continue to experiment with prompt engineering, you’ll discover new ways to leverage its capabilities to improve your workflow and build better web applications.

Happy Learning!

Level Up Your Java Kung Fu with the Prompt Engineering Dragon Scroll

Our Java developers wear many hats, constantly juggling tasks like code improvement, fixing complex logic issues, troubleshooting bugs, optimizing performance, and building new features. This multitasking can be challenging, especially in fast-paced release cycles.

We cannot simply keep adding team members; beyond a point, that slows us down instead of moving us forward.

What would be a solution to help our developers?

Enter prompt engineering, a powerful tool to streamline your workflow. By crafting specific prompts for large language models (LLMs), one can gain valuable insights and automate repetitive tasks (boilerplate code), freeing up developer time for creative problem-solving and core development activities.

Photo by SOON SANTOS on Unsplash

Let’s consider a Java developer using Java 21, Streams, Spring Boot, Spring Webflux, and AWS Aurora MySQL for API development. How can this developer benefit from prompt engineering?

This post is an attempt to help developers take a look at LLMs and use them the right way. If you are new to this topic, refer to these guides.

1. Code Enhancement:

  • Prompt: Analyze the following Java code snippet written in Java 21 utilizing Streams and identify potential areas for improvement related to efficiency and readability. Consider best practices for utilizing Streams and suggest alternative approaches if applicable.

Java

public List<Product> filterProductsByCategory(String category) {
  return productRepository.findAll()
      .stream()
      .filter(product -> product.getCategory().equals(category))
      .collect(Collectors.toList());
}

Expected Output: The LLM might suggest using a pre-built predicate for category filtering to improve readability and potentially using parallel streams for larger datasets to enhance performance.
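As a sketch of what that suggestion could look like (a named, reusable predicate plus toList(), available since Java 16; java.util.function.Predicate is assumed to be imported):

Java

private static Predicate<Product> inCategory(String category) {
  return product -> product.getCategory().equals(category);
}

public List<Product> filterProductsByCategory(String category) {
  // For very large in-memory lists, parallelStream() could be evaluated here
  return productRepository.findAll()
      .stream()
      .filter(inCategory(category))
      .toList();
}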

  • Prompt: “Analyze the following Java code snippet using Spring Boot and identify opportunities for improvement in terms of readability and maintainability. Consider utilizing Java 21 features where appropriate.”

Java

public class LegacyService {
  public List<Product> getProductsByCategory(String category) {
    List<Product> allProducts = productRepository.findAll();
    List<Product> filteredProducts = new ArrayList<>();
    for (Product product : allProducts) {
      if (product.getCategory().equals(category)) {
        filteredProducts.add(product);
      }
    }
    return filteredProducts;
  }
}

Expected Output: This prompt provides context (Spring Boot), code for analysis, and the desired outcome (readability and maintainability improvements with Java 21). The LLM might suggest using Java Streams to achieve a more concise and efficient solution.

2. Refactoring:

  • Prompt: The provided Java code snippet utilizes nested if statements and repetitive logic to determine user eligibility for a discount. Refactor the code using a more concise and maintainable approach, potentially leveraging Java’s switch statement or lambda expressions.

Java

public double calculateDiscount(User user, Product product) {
  if (user.getUserType().equals("Gold")) {
    return 0.1 * product.getPrice();
  } else if (user.getUserType().equals("Silver")) {
    if (user.getPurchaseCount() > 10) {
      return 0.05 * product.getPrice();
    } else {
      return 0;
    }
  } else {
    return 0;
  }
}

Expected Output: The LLM could suggest refactoring the code using a switch statement or a map with lambda expressions for each user type, improving readability and maintainability.
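One plausible shape for that refactoring, a sketch using a switch expression (Java 14+):

Java

public double calculateDiscount(User user, Product product) {
  return switch (user.getUserType()) {
    case "Gold" -> 0.1 * product.getPrice();
    case "Silver" -> user.getPurchaseCount() > 10 ? 0.05 * product.getPrice() : 0;
    default -> 0;
  };
}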

  • Prompt: “Refactor the following Java code utilizing Java Streams to optimize performance for large datasets. The code retrieves customer orders from an AWS Aurora MySQL database using Spring Webflux.”

Java

public Mono<List<Order>> getCustomerOrders(String customerId) {
  return orderRepository.findByCustomerId(customerId)
    .collectList();
}

Expected Output: Here, we specify the context (Spring Webflux, Aurora MySQL) and the code snippet. We ask for refactoring with Java Streams to handle large datasets efficiently. The LLM might suggest using flatMap to process orders in a non-blocking manner.
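A sketch of the non-blocking shape the LLM might propose: keep the result as a Flux and process each order as it arrives instead of collecting everything into one list first (enrichWithItems is a hypothetical per-order step returning a Mono<Order>):

Java

public Flux<Order> getCustomerOrders(String customerId) {
  return orderRepository.findByCustomerId(customerId)
      .flatMap(this::enrichWithItems); // each order is processed without blocking
}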

3. Bug Identification:

  • Prompt: Analyze the following Spring Boot controller code and identify potential issues related to null pointer exceptions or security vulnerabilities. The code utilizes Spring Webflux for reactive programming.

Java

@RestController
public class OrderController {

  @PostMapping("/orders")
  public Mono<Order> createOrder(@RequestBody Order order) {
    return orderRepository.save(order)
        .flatMap(savedOrder -> userService.getUserById(savedOrder.getUserId())
            .flatMap(user -> {
              if (user == null) {
                throw new RuntimeException("User not found!");
              }
              return Mono.just(savedOrder);
            }));
  }
}

Expected Output: The LLM might point out the potential null pointer exception when userService.getUserById returns null and suggest using Optional or a null-safe navigation operator to handle the scenario gracefully.
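In a Reactor pipeline, an absent user typically surfaces as an empty Mono rather than null, so one null-safe rewrite along the lines of that suggestion is:

Java

@PostMapping("/orders")
public Mono<Order> createOrder(@RequestBody Order order) {
  return orderRepository.save(order)
      .flatMap(savedOrder -> userService.getUserById(savedOrder.getUserId())
          // An empty Mono (user not found) becomes an explicit error signal
          .switchIfEmpty(Mono.error(new RuntimeException("User not found!")))
          .thenReturn(savedOrder));
}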

  • Prompt: “Review the following code for potential bugs related to null pointer exceptions. The code interacts with a user model stored in an AWS Aurora MySQL database.”

Java

public void updateUserEmail(String userId, String newEmail) {
  User user = userRepository.findById(userId).get();
  user.setEmail(newEmail);
  userRepository.save(user);
}

Expected Output: We provide context (Aurora MySQL user model) and the code. We ask the LLM to identify potential issues related to null pointer exceptions, a common bug in Java code. The LLM might suggest using Optional to handle the possibility of a user not being found.
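A sketch of the Optional-based rewrite the LLM might suggest (Spring Data’s findById already returns an Optional):

Java

public void updateUserEmail(String userId, String newEmail) {
  userRepository.findById(userId).ifPresentOrElse(
      user -> {
        user.setEmail(newEmail);
        userRepository.save(user);
      },
      // Fail loudly instead of throwing NoSuchElementException from get()
      () -> { throw new IllegalArgumentException("User not found: " + userId); });
}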

4. Performance Optimization:

  • Prompt: Analyze the following code snippet querying data from AWS Aurora MySQL using Spring Data JPA. Suggest potential optimizations to improve query performance, considering indexing and batch processing techniques.

Java

public List<Order> findOrdersByCustomerId(Long customerId) {
  return orderRepository.findByCustomerId(customerId);
}

Expected Output: The LLM could recommend adding an index on the customerId column in the database table and suggest using batch fetching instead of individual queries for performance gains, especially when dealing with large datasets.

  • Prompt: “Analyze the following Spring Boot application and identify bottlenecks impacting performance. Suggest potential optimizations utilizing caching mechanisms.”

Java

@SpringBootApplication
@RestController
public class MyApplication {
  @Autowired
  private ProductService productService;

  @GetMapping("/products/{productId}")
  public Mono<Product> getProductById(@PathVariable String productId) {
    return productService.getProductById(productId);
  }
}

Expected Output: We provide context (Spring Boot application) and a code snippet representing an API endpoint. We ask the LLM to identify performance bottlenecks and suggest caching strategies using Spring Cache. The LLM might suggest caching frequently accessed products to improve response times.
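As a sketch of the caching suggestion, assuming Spring’s cache abstraction with @EnableCaching configured; note that @Cacheable only gained support for reactive return types like Mono in Spring Framework 6.1, so a blocking service method is shown here:

Java

@Service
public class ProductService {

  // Results are cached by productId; repeated lookups skip the database
  @Cacheable(value = "products", key = "#productId")
  public Product getProductById(String productId) {
    return productRepository.findById(productId).orElseThrow();
  }
}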

5. Feature Development:

  • Prompt: Develop a new Spring Boot microservice that utilizes Java 21 features and integrates with AWS Aurora MySQL to manage user subscriptions. The service should offer functionalities for creating, updating, and cancelling subscriptions.
  • Expected Output: The LLM might not generate the entire microservice, but it could provide a basic skeleton with essential classes, repository interfaces, and potential API endpoints, giving you a head start on development.
  • Prompt: “Generate a Spring Webflux controller that exposes an API endpoint to accept a list of product IDs and return the corresponding product details from an AWS Aurora MySQL database. Implement error handling for invalid product IDs.”

Java

@RestController
@RequestMapping("/api/v1/products")
public class ProductController {

  @Autowired
  private ProductService productService;

  // Implement the API endpoint here
}

  • Explanation: This prompt provides context (Spring Webflux, Aurora MySQL), existing code structure, and the desired functionality (API endpoint with error handling). The LLM can generate the complete controller implementation.

Remember: Prompt engineering is a powerful tool, but it’s not a magic bullet. Always review and adapt the LLM’s suggestions to your specific codebase and requirements. With practice, you’ll master the art of crafting effective prompts, unlocking a new level of efficiency and innovation in your Java development.

Happy Learning!

From Manual Mayhem to LLM Zen: Conquering the Testing Beast

Continuing the “Taming the Test Tornado” series, we’ve previously explored time management challenges and how Large Language Models (LLMs) can support testers.

In this post, we’ll shift gears and focus on leveraging LLMs to improve enterprise application testing. To illustrate, let’s explore how LLMs can enhance testing for Salesforce CRM. We’ll delve into testing the “Lead Intelligence View” feature, which aggregates data from various sources to provide a comprehensive view of potential leads (public documentation available here).

For any test engineer, continuously improving test coverage and identifying edge cases is crucial. This post will delve into how LLMs can be an invaluable asset in achieving these goals.

Photo by Andrea De Santis on Unsplash

Prompts to Turbocharge Your Testing with LLMs:

Generate Edge Case Scenarios: “Create 10 test cases for Lead Intelligence View that simulate unusual combinations of data fields, such as missing contact details or conflicting company information.”

  • Do: Be specific about the desired outcome. This prompt clearly outlines the number and type of test cases needed.
  • Don’t: Be vague. A prompt like “Generate interesting test cases” might not yield the desired results.

Identify Boundary Value Analysis (BVA) Tests: “Suggest test cases for Lead Intelligence View that explore the maximum and minimum character limits for various data fields, such as company name and website URL.”

  • Do: Specify the functionality and data elements relevant to BVA testing.
  • Don’t: Omit details about the specific application and testing methodology.

Craft Negative Test Cases: “Formulate 15 test cases for Lead Intelligence View that intentionally provide invalid data, like nonsensical characters in email addresses or unrealistic phone numbers, and verify expected error messages or behaviors.”

  • Do: Emphasize the nature of the invalid data and the desired outcome (e.g., error message).
  • Don’t: Simply ask for “negative test cases” without providing context or expected behavior.

Simulate User Behavior: “Describe 20 test cases for Lead Intelligence View that mimic diverse user interactions, such as filtering leads based on various criteria, exporting data to different formats, and navigating through the interface with screen reader accessibility features enabled.”

  • Do: Enumerate specific user actions and functionalities to be tested.
  • Don’t: Provide a generic prompt like “test user interactions” without details.

Uncover Hidden Functionality: “Based on publicly available information about Lead Intelligence View, propose 5 test cases that explore possible undocumented features or functionalities not explicitly mentioned in the official documentation.”

  • Do: Specify the source of information (e.g., public documentation) and the desired scope of the exploration.
  • Don’t: Ask LLMs to guess undocumented features without providing any context.

Brainstorm Edge Cases: “List 10 unconventional scenarios where a user might interact with the Lead Intelligence View in Salesforce, considering fringe data, user roles, and system configurations.”

This prompt encourages the LLM to think outside the box, generating edge cases you might have missed.

Identify Risky Interactions: “For the Lead Intelligence View in Salesforce, identify 5 potential user interactions that could lead to security vulnerabilities or data inconsistencies.”

This taps into the LLM’s ability to understand complex systems and identify potential risks.

Simulate the User Journey: “Simulate the user journey of a Salesforce user exploring Lead Intelligence View. Identify potential user interactions, decisions, and error points. Generate test cases covering these interactions. Write detailed test cases for the scenarios, focusing on both positive and negative cases, and include at least 10 cases for each.”

Remember: LLMs are not magic wands, but rather powerful tools waiting to be wielded by skilled testers. So, get creative with your prompts, leverage the LLM’s potential, and watch your test cases soar to new heights!

By incorporating LLMs into your testing strategy, you can:

  • Increase test coverage: Generate a wider variety of test cases, including unexpected scenarios and edge cases.
  • Boost efficiency: Save time by leveraging LLMs to automate repetitive tasks like brainstorming test ideas.
  • Enhance quality: Identify potential issues and defects that might be missed by traditional methods.

This is just a glimpse into the exciting potential of LLMs for test case creation. As these models continue to evolve, their ability to support software quality assurance will undoubtedly grow even stronger.

Happy Learning!