I've been seeing posts lately about developer value, productivity, and refactoring versus "fix the bug and move on with your life." I thought I'd give it a ramble, just for grins.

One post illustrated that at their shop each dev averages 1.48 releases to production every day. (I don't remember the exact number; it was > 1 with a max of around 2.5, but it was an EXACT number, which is impressive.) The post was actually about Push and Pray versus Staging. What caught my attention, though, was the highlighting of the average number of pushes rather than what those releases did to A) increase revenue, and by how much; B) increase customer/user happiness, and by how much; C) increase profits, and by how much.

I've always held to the notion that developer time is gold. Pure gold. Not only do we command relatively large salaries, but the opportunity costs are astronomical in some cases. Developer time wasted is too big a risk. Each task given to a developer should be measured against A, B, or C above. If those metrics (revenue, profit, and customer satisfaction) can't match what you would pay to have the ticket completed, then you should enter tickets that DO, and focus on tasks whose value will exceed what you pay in development costs. Not only is that sustainable business, it is also better for developer morale, happiness, and productivity. We love to hear about how we helped the business in those terms (A, B, C); not so much that we completed X amount of busy work last quarter.

Another theme I've seen, by the magic of social media feed AI, is the age-old debate of refactoring versus patching. I wanted to write down some thoughts about that versus A, B, and C above. I think if your codebase has significant design flaws that are affecting those metrics, the fix should be evaluated as a ticket. Negative impacts can be accounted for in the same way, and growth that the code prohibits can also be weighed against development costs. Most of the time, if it's not broken, we don't fix it.
And we SHOULDN'T fix it, because revenue, profit, and customer satisfaction are not impacted at all. But sometimes those issues DO affect those things significantly.

My recent case: I have a codebase I wrote completely myself, years ago, and it was fairly unfinished. It involved a manual 30-60-minute process of collecting data, sorting data, massaging data, pasting data, three times over, etc. Not a huge deal, and easily automated, yet if you only have to do it once or twice per quarter, you can't justify hiring a dev to automate it. Until you forget to do it, and the "business" loses money.

It wasn't broken, so I didn't fix it. Because fixing it meant refactoring the entire data model, switching the entire data source, introducing a third-party tool (which actually became two third-party tools), and all the joy that comes with that sort of thing. Until the "business" lost enough money that the 12-18 story points required to complete the work got backlogged and the "dev team" picked it up.

Dev time is gold. Use it wisely.
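The "can't justify hiring a dev" judgment above is really a break-even calculation. Here is a small sketch of that arithmetic; all the inputs (rates, frequencies, field names) are hypothetical placeholders, not figures from the post:

```typescript
// Break-even sketch: does automating a recurring manual task pay for itself?
// All inputs are illustrative placeholders, not figures from the post.

interface TaskProfile {
  manualMinutesPerRun: number;           // time the manual process takes
  runsPerYear: number;                   // how often it must be done
  operatorHourlyRate: number;            // cost of the person doing it manually
  automationHours: number;               // dev effort to automate it
  devHourlyRate: number;                 // cost of that dev effort
  expectedAnnualLossIfForgotten: number; // risk cost: chance of forgetting * impact
}

// Annual cost of staying manual: labor plus the expected loss from
// occasionally forgetting to run the process at all.
function annualManualCost(t: TaskProfile): number {
  const labor = (t.manualMinutesPerRun / 60) * t.runsPerYear * t.operatorHourlyRate;
  return labor + t.expectedAnnualLossIfForgotten;
}

// One-time cost of the automation ticket.
function automationCost(t: TaskProfile): number {
  return t.automationHours * t.devHourlyRate;
}

// Years until the automation has paid for itself.
function breakEvenYears(t: TaskProfile): number {
  return automationCost(t) / annualManualCost(t);
}
```

With a task run once or twice a quarter, the labor term alone rarely justifies the ticket; it is the `expectedAnnualLossIfForgotten` term that tips the scale, which is exactly what happened in the story above.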
William Dickey’s Post
More Relevant Posts
-
Platform Engineering’s Most Critical First Decision https://lnkd.in/gfTxqiYr

Building a platform engineering platform for your company is a big task, with lots of critical decisions that must be made. But perhaps the most important decision to tackle first is where to start building the platform: from the frontend or from the backend. Why is this so critical? Because for a platform engineering platform to work well and be successful, it needs to be built around a well-designed backend with solid business logic that allows it to best serve the developers who will use it, Luca Galante, a core contributor to the global developers' community PlatformEngineering.org, told The New Stack. By starting with the backend and that critical business logic, the platform can then be used with any kind of graphical user interface (GUI), a code-based interface, or a command line interface (CLI), he said.

“What you want is a very solid core as a backend and then you can plug and play different interfaces for different users and for different levels of abstraction that you want to provide to them,” he said. “You cannot really build business logic into a frontend. The frontend is just designed to visualize stuff, to give you a nice developer experience. It is not designed to let you define how developers interact with the underlying infrastructure, or how they configure things in detail. It does not let you layer role-based access control on top.”

And that business logic is important because it creates a powerful foundation for everything that will follow as the platform (also called an internal developer platform, or IDP) is engineered and built, he said. “If you start with the portal first, you do not have any of that flexibility, because the developer experience is constrained,” said Galante, who is also the vice president of product and growth for platform engineering vendor Humanitec.
“It needs to be the same across different teams, across different workflows, which does not scale in the enterprise and puts you back to square one to rethink this from the ground up.”

These are not new concepts, he added. “Building a platform is like building any other application,” said Galante. “Nobody builds applications frontend first. That is just not the practice.”

Origins of the ‘Frontend vs. Backend First’ Argument

So, with all these sensible arguments for starting with the backend and its robust built-in business logic, why do some nascent platform engineering teams still try to design their company's IDPs frontend-first? In those cases, said Galante, it is often because the platform team is motivated to build something that will instantly show some kind of early success to the company executives who mandated the creation of a platform engineering infrastructure and provided the funding. “I think that the majority of them...
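Galante's "solid core, plug-and-play interfaces" point can be sketched in code. This is an illustrative shape only, not Humanitec's or any real IDP's API: the business logic (who may deploy what, where) lives in one core, and a CLI, GUI, or API layer is a thin adapter over it:

```typescript
// Sketch: business logic lives in the platform core; frontends are thin adapters.
// All names here are hypothetical, not a real IDP's API.

type Role = "developer" | "admin";

interface DeployRequest {
  app: string;
  env: "dev" | "staging" | "prod";
  requestedBy: { name: string; role: Role };
}

// The "solid core": role-based access control and environment rules
// are defined once, independent of any UI.
class PlatformCore {
  deploy(req: DeployRequest): string {
    if (req.env === "prod" && req.requestedBy.role !== "admin") {
      return `DENIED: ${req.requestedBy.name} cannot deploy to prod`;
    }
    return `OK: deploying ${req.app} to ${req.env}`;
  }
}

// An interchangeable interface: it only translates input and delegates
// to the core. Adding a GUI or API adapter never touches the business logic.
class CliAdapter {
  constructor(private core: PlatformCore) {}
  run(argv: string[], user: { name: string; role: Role }): string {
    const [app, env] = argv;
    return this.core.deploy({ app, env: env as DeployRequest["env"], requestedBy: user });
  }
}
```

Starting portal-first inverts this: the rules end up encoded in screens, and every new team or workflow means rebuilding the frontend.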
-
New Post: Automate repetitive tasks with the Pinion code generator - https://lnkd.in/gXaXxgZ8 -

When working with certain design patterns, your project will almost always follow a specific structure. Manually setting up these projects can be daunting and tedious; you can save time by using code generators for these repetitive tasks. Code generators are tools that automate repetitive work such as generating boilerplate code. They increase productivity by freeing up time so that you can focus on more productive areas of your project. A code generator can generate configuration files, create directories, and set up necessary tooling, including linters, testing frameworks, compilers, and more. There are several code generation tools in the JavaScript ecosystem, including Pinion, Yeoman, Plop, and Hygen. In this article, we will focus on the Pinion code generator toolkit and how to use it to automate repetitive tasks.

What is a code generator? Generators remove low-level, tedious, repetitive tasks such as creating directories and setting up development tools (linters, testing frameworks, compilers, and transpilers) so that you can focus on more productive activities like building and shipping products. Code generators also help you follow best practices and keep a consistent style and structure in your project, making it easier to maintain. They typically depend on user preferences, which can be specified via the command line, to ensure the generated code aligns with the developer's needs. There are several third-party code generators built with tools such as Yeoman. However, there is no one-size-fits-all code generator: existing generators may not meet your project requirements, coding style, or licensing requirements, especially when building enterprise projects.
You also need to take into account the long-term maintenance and potential security issues such packages may pose. To meet your project requirements, you might need to develop your own generator using tools like Yeoman, Hygen, Plop, or Pinion. In this article, we will focus on Pinion.

An introduction to Pinion: Pinion is a free, open source, MIT-licensed toolkit for creating code generators. It is a lightweight Node.js package. Though you need TypeScript to create code generators with Pinion, you can use Pinion in any project, including those that don't use Node.js or JavaScript. To use Pinion in a non-JavaScript or non-TypeScript project, first install a supported Node.js version and initialize the project with the npm init --yes command before installing Pinion. Pinion is type-safe, fast, and composable, and has a minimal API, so you can learn it fairly quickly. Unlike similar tools such as Yeoman, Plop, and Hygen, however, Pinion is relatively new.

Using the Pinion code generator toolkit: In this section, you will learn how to use Pinion to automate repetitive tasks…
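To make the core idea concrete, here is a toy generator in the spirit of tools like Pinion. This is deliberately not Pinion's actual API (see its documentation for that); it only shows the mechanic all such tools share: take user preferences as a typed context object and render files from templates:

```typescript
// Toy code generator: context in, rendered files out.
// Illustrative only; not the Pinion API.

interface GeneratorContext {
  name: string;          // e.g. the component or module name
  language: "ts" | "js"; // user preference, e.g. from a CLI prompt
}

// A template is just a function of the context.
type Template = (ctx: GeneratorContext) => string;

const componentTemplate: Template = ({ name }) =>
  `export const ${name} = () => {\n  // TODO: implement ${name}\n};\n`;

// "Rendering" produces a file path plus contents; a real toolkit would
// also write to disk, prompt for missing context values, and so on.
function render(
  ctx: GeneratorContext,
  template: Template
): { path: string; contents: string } {
  return {
    path: `src/${ctx.name}.${ctx.language}`,
    contents: template(ctx),
  };
}
```

Because both the context and the templates are plain TypeScript, the generated output is type-checked, which is the main advantage type-safe generator toolkits advertise over string-only templating.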
-
How to build a Full Stack Product from Scratch?

Backend Development:
- Foundation: Start with Node.js and Express for the backend, offering a solid foundation for building efficient and scalable APIs.
- Authentication and Authorization: Implement secure user access using JWT (JSON Web Tokens), ensuring data privacy and seamless user interactions.
- Abstract Base Model: Create a robust abstract base model that serves as a blueprint for your database models, promoting consistency and reducing code duplication.
- Notifications Service: Elevate user engagement through a comprehensive notifications system, e.g. push notifications with Firebase Cloud Messaging (FCM).
- Error Logging: Utilize Sentry or Rollbar for efficient error tracking and debugging, ensuring a smooth user experience.
- Logging and Monitoring: Set up the ELK (Elasticsearch, Logstash, Kibana) stack to centralize logging and monitor application health.
- Throttling and Rate Limiting: Incorporate mechanisms to prevent DoS and DDoS attacks, safeguarding your application's availability.
- Asynchronous Communication: Implement RabbitMQ for asynchronous communication, enhancing data flow and system reliability.
- Cron Jobs: Automate tasks with Cronitor or Celery Beat, streamlining maintenance and freeing up resources.
- Secrets Management: Prioritize security by using HashiCorp Vault to manage sensitive information effectively.

Frontend Development:
- Framework and Language: Opt for React, a powerful JavaScript library, for creating dynamic and engaging user interfaces.
- Responsive Design: Ensure your application adapts seamlessly to various screen sizes by embracing responsive design principles.
- State Management: Utilize Redux for efficient state management, ensuring consistent data flow across components.
- Routing: Implement React Router for smooth navigation and dynamic content loading.
- UI Design and Component Library: Collaborate closely with designers; component libraries like Material-UI help keep the UI polished and consistent.
- Form Handling: Simplify form creation and validation with Formik, enhancing the user experience during data input.
- Testing and Performance: Embrace testing tools like Jest.

Full Stack:
- API Integration: Seamlessly connect frontend and backend using RESTful APIs or GraphQL.
- CI/CD Integration: Automate testing and deployment with CI/CD pipelines like GitLab CI, ensuring code quality and rapid delivery.
- Version Control: Use Git for version control to maintain a collaborative development process and track changes.
- Monitoring and Analytics: Employ New Relic for application performance monitoring and valuable insights.
- User Experience and Accessibility: Prioritize UX and accessibility by following best practices, making your application user-friendly for all.
- Security and Firewall: Strengthen your production environment with Nginx, ensuring a secure setup.

Credit: Sahil Chopra
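Of the items above, throttling and rate limiting is the one most often hand-rolled, and it is usually a token bucket. Here is a minimal, framework-free sketch; in a real Express app this would live in middleware, keyed per client IP or API key:

```typescript
// Minimal token bucket: up to `capacity` requests at once, refilled
// continuously at `refillPerSecond`. Requests beyond that are throttled.
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private capacity: number,
    private refillPerSecond: number,
    now: number = Date.now()
  ) {
    this.tokens = capacity;
    this.lastRefill = now;
  }

  // Returns true if the request is allowed, false if throttled.
  allow(now: number = Date.now()): boolean {
    // Refill based on elapsed time, never exceeding capacity.
    const elapsedSec = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.refillPerSecond);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

Passing the clock in as a parameter makes the limiter deterministic and therefore easy to unit test, which matters once this sits in front of production traffic.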
-
The million-dollar product dilemma: Too simple to be useful, or too useful to be simple? 🤔

As most products try to please more people, they add more features. These features then lead to more buttons, more tutorials, more tooltips, and, consequently, more complexity. Still, that's not a natural law. It's not as if software was good and more features corrupted it. It's definitely possible to create software that's both simple and flexible - it's just really difficult. It's way easier to add a bunch of buttons to yet another screen than to rethink your abstractions so that those buttons aren't needed in the first place. At Briefer (YC S23), we've had to do a lot of that. So here are four simple principles that helped us design software that's powerful enough for data people to use, but easy enough for non-technical stakeholders to understand:

1. Different personas see different UIs. There's no point in showing a bunch of code blocks and buttons to non-technical stakeholders, for example. These people want to see results, tables, and graphs, so we give them that.

2. Show buttons and options only when they're relevant. If you're writing code within a Python block, you don't need to see the control buttons for all the other blocks around it. Similarly, if you've just logged in and don't have any data sources connected, there's not much point in showing you anything besides the "add data sources" button.

3. The more you use something, the more prominent it should be. Users don't need to see their environment's details or settings all the time, but they do need to run code, write queries, and build visualizations very often. That's why the buttons for those actions are prominent and easy to find, while the environment settings are a small button on the bottom left of your editor.

4. Always ask yourself: "Can we rethink our abstractions so we don't need more UI?"
As engineers, we know that the fastest piece of code is the code that was never written, and we apply the same principle to our UI. Whenever we can rethink the way our software works and avoid adding yet another dropdown menu, button, or configuration setting, we do it. (By the way, this approach is called "cutting the Gordian knot" - its Wikipedia page is great.) Anyway, I hope you found these four principles useful, no matter your role. It's everyone's job to fight complexity as they add features to their product.
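Principles 1 and 2 above boil down to a pure function from (persona, state) to visible controls. Here is a sketch of that shape; the control names are invented for illustration, not Briefer's actual UI:

```typescript
// Which controls to show, as a pure function of who is looking and
// what state the document is in. Control names are made up.

type Persona = "technical" | "stakeholder";

interface ViewState {
  hasDataSources: boolean;
  editingCodeBlock: boolean;
}

function visibleControls(persona: Persona, state: ViewState): string[] {
  // Principle 2: with no data connected, there is only one sensible action.
  if (!state.hasDataSources) return ["add-data-source"];

  // Principle 1: stakeholders see results, not code machinery.
  if (persona === "stakeholder") return ["tables", "graphs", "export"];

  // Principle 2 again: while editing one block, hide the other blocks' controls.
  return state.editingCodeBlock
    ? ["run-block", "block-output"]
    : ["run-block", "add-block", "query-editor", "visualize"];
}
```

Keeping this decision in one pure function (rather than scattering `if` checks across components) also makes the "who sees what" policy trivially testable.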
-
It’s time to start building. It clicked for me today: creating software is becoming as simple as writing a tweet. And it might click for you too. Let me break it down.

Not long ago, creating something online felt exclusive - reserved for those who could navigate the complexities of HTML, servers, and raw code. The internet was controlled by gatekeepers: developers, IT admins, and coders. Then came the rise of social platforms. Tools like Facebook, Instagram, and YouTube democratized content creation. They lowered the barrier to entry, and suddenly everyone could be a creator. The power shifted.

The same transformation is unfolding in software today - but it's even more disruptive. AI isn't just making software easier to build; it's flipping the whole process upside down. Think of how photography evolved:

2005: If you wanted to showcase your work, you had to code a portfolio site, manage servers, learn Photoshop, and deal with domain hosting.
2015: Instagram simplified everything - you just needed an account, a phone, and an eye for filters.
2024: We're approaching a world where you simply hit 'generate' on an AI tool like Midjourney, and a professional-level image is instantly ready.

Now the same leap is happening with software development. For years, it was a field built on technical mastery:

2015: The narrative was "Learn to code or die" - technical skills were seen as the ultimate competitive edge.
2020: No-code platforms like Bubble emerged, promising that "no-code will save us" by letting non-developers build software.
2024: We're entering a new phase: "Just tell AI what you want."

The very definition of what it means to build software is being rewritten. And here's what's critical: this shift is happening faster than anyone expected. Just a couple of weeks back, Anthropic launched an API capability that lets Claude use computers just like humans do. Now AI can control entire systems on our behalf, navigating interfaces and performing tasks that used to require human intervention.
The real winners won’t be the best coders or even the most skilled AI prompt engineers. They'll be regular people like you who: ✴ Deeply understand real-world problems that need solving ✴ Know how to communicate the pain point in a simple yet beautiful way ✴ Rapidly iterate based on feedback without ego ✴ Build distribution/audience/community first, then layer on the product You're probably seeing it already: 👨💻 Solo founders are building fully functional AI agents over a weekend. 👨💻 Non-technical makers are launching SaaS products on X 👨💻 Practitioners are translating their expertise into automated workflows, turning ideas into software with little-to-no software knowledge and waking up to new MRR We’re witnessing the true democratization of software. Every maker is becoming a founder, and every idea has the potential to become an app or business. The gatekeepers are gone. The tools are in your hands. And it's kinda magical.
-
More founder stories! As I promised last time, we have an amazing lesson/advice to talk about. We were at the beginning of Airbyte as people know it today. We were coming out of a long journey through the desert, and building something was really itching us! Figma mockups and landing pages were good, but we were hitting a plateau in learning. We decided to change our strategy. Instead of exploring more problem spaces, we would settle on a possible solution and we would spend one month building an MVP. In the meantime, we would continue to have conversations with our identified audience to get them to use what we were building and to give us early feedback. At the end of the month, we would decide if we wanted to continue down that path or prototype something else. The first idea we started working on was an Open-Source Data Integration platform (rings a bell?). It was really our first time starting an Open-Source project from scratch. We’d all been users and contributors during our careers, but never maintainers. To learn more about it, we started talking to successful Open-Source CEOs. And there was one call that changed how we built Airbyte. John and I talked with Sid Sijbrandij (CEO at Gitlab), and we described our idea to build an Open-Source product. He gave us two pieces of advice. The first was to do everything we could to reduce the time to value for people experimenting with the product. Scale didn’t matter. Our focus should be on getting to the “WOW” effect as fast as possible. This is advice we’ve followed ever since. The second was to accept that we would never know who was using our product. For two people working in data, this was hard to swallow, but we decided to do everything possible to ensure this would not be the case 🙂. I will talk more about that in a future post! Our month of August was only about calls and code! We had absolutely no problem getting in touch with engineers, data engineers, and VPs of data. 
They were very open about sharing all the pain they were experiencing bringing data into their infrastructure. Many of them had been changing how they approached data internally by adopting modern warehouses. They had a lot to build and were very frustrated with all the existing out-of-the-box solutions: unreliable, not extensible, and more expensive than their warehouses... Almost all of them used two systems, a cloud-based solution and an in-house custom-built solution. It was a real nightmare in terms of maintenance and what we were building seemed like a Silver Bullet to them! As we approached the end of August, we checked on our progress and decided that given all the green signals, we should go for one more month. And that was how the Airbyte project began! We never checked in again on “should we continue or not”! Next time I will talk about how we released the first version, and how we did our fundraising at the end of 2020. #airbyte #entrepreneurship #founderjourney
-
🚀 Dependency Graphs in BackEnd Web Development: A Game Changer 🚀

Dependency graphs are crucial for visualizing and managing complex backend web projects. They provide a clear picture of interdependencies, enabling efficient development and maintenance.

🤔 What Problem Do They Solve?
- Improved Code Understanding: Visualize how different components interact, making codebases easier to grasp. Imagine understanding the intricate flow of data between APIs, databases, and services.
- Enhanced Collaboration: Teams can easily share and understand the project's structure, fostering better communication and reducing errors. Example: a marketing team can see how their campaign data flows through the system.
- Simplified Debugging: Quickly pinpoint the source of errors by tracing the flow of data through the system. Example: identify a database query causing a performance bottleneck.
- Efficient Testing: Design and execute tests targeting specific components and their dependencies. Example: test the user authentication flow without affecting other parts of the system.
- Effective Scaling: Visualize how the system scales and identify potential bottlenecks. Example: analyze the impact of adding new features on existing components.

📈 Key Benefits:
- Reduced Development Time: Faster identification and resolution of issues.
- Improved Code Quality: Clearer understanding of dependencies leads to more robust code.
- Enhanced Maintainability: Easier to update and modify existing code.
- Increased Team Productivity: Improved communication and collaboration.

🛠️ Tools & Frameworks:
- Graph Databases: Neo4j, Amazon Neptune
- Visualization Libraries: D3.js, Graphviz
- Project Management Tools: Jira, Asana (can integrate with visualization tools)

💡 Use Cases:
- Microservices Architecture: Visualize the interactions between different microservices.
- API Design: Map the dependencies between different APIs.
- Complex Data Pipelines: Visualize the flow of data through various stages.
By leveraging dependency graphs, you can significantly improve the efficiency and maintainability of your backend web development projects. #BackendDevelopment #DependencyGraph #SoftwareEngineering #WebDevelopment #DevOps
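Under the hood, a dependency graph is just an adjacency list, and most of the benefits above (tracing flows, spotting circular dependencies) start with a traversal. A minimal sketch using Kahn's algorithm; the service names are invented examples:

```typescript
// Dependency graph as an adjacency list: edges point from a component
// to the components it depends on.
type Graph = Record<string, string[]>;

// Kahn's algorithm: returns an order in which each component appears
// before the components it depends on, or null if there is a cycle
// (i.e. a circular dependency).
function topoSort(graph: Graph): string[] | null {
  const indegree: Record<string, number> = {};
  for (const node of Object.keys(graph)) indegree[node] ??= 0;
  for (const deps of Object.values(graph)) {
    for (const d of deps) indegree[d] = (indegree[d] ?? 0) + 1;
  }
  const queue = Object.keys(indegree).filter((n) => indegree[n] === 0);
  const order: string[] = [];
  while (queue.length > 0) {
    const n = queue.shift()!;
    order.push(n);
    for (const d of graph[n] ?? []) {
      if (--indegree[d] === 0) queue.push(d);
    }
  }
  // If some node never reached indegree 0, the graph has a cycle.
  return order.length === Object.keys(indegree).length ? order : null;
}
```

Reversing the returned order gives a safe build/deploy order (dependencies first), and a `null` result is exactly the circular-dependency warning most graph tooling surfaces.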
-
If you have a system design that needs a good-looking, reliable, and customizable diagramming feature, you should consider react-flow. Read the article by our frontend developer Uladzislau Plakhotnik, who has mastered this library. The library has 24.1K stars on GitHub and is still growing (https://lnkd.in/dg2H8T_C). We have been using it in our projects for four years already, and it is amazing to see how the project keeps growing. The library can be adapted to any problem domain, so the number of possible use cases is endless, in my opinion. #BPMN #ERD #systemdesign
If you are developing or designing a system that needs diagrams, consider using #react and #reactflow https://lnkd.in/dxgvCgEZ. React-flow is open to extension and customization, so you can achieve any desired effect; it has a huge community on Discord where you can find skilled developers and help, and it is used by well-known companies like Typeform.
React Flow — modern solution to create diagrams in React
-
TestCafe Overview: TestCafe is a powerful tool for automating test cases for web applications. It allows you to perform end-to-end testing with ease.

Best Practices for Writing Automation Code with TestCafe:

Code Structure:
- Follow the Page Object Model (POM): It helps you avoid writing duplicate code, making your tests easier to maintain and update in the future.
- Use Metadata: Apply metadata for different testing scopes, such as sanity and regression, to organize and manage your tests effectively.
- Run Tests Concurrently: Control test execution at the fixture level, deciding whether to run tests in parallel or serially across multiple browser instances. Use the `disableConcurrency` option as needed.
- Leverage Built-in Wait Mechanisms: Use TestCafe's built-in wait mechanisms to ensure elements are visible or in the desired state before proceeding with test steps.

Generating Reports:
- Allure Reports: Utilize Allure reports, which provide detailed step-by-step information. Ensure your code uses step definitions to enhance report clarity; for this, use the `testcafe-reporter-allure-plus` package.
- Single HTML Report: Convert your reports into a single HTML file and share it with stakeholders.

Running Tests:
- CI/CD Pipeline: Integrate your tests with a GitHub CI/CD pipeline to automate execution and store the output in artifacts, or write a script to upload the results to a different location.
- Docker Integration: Use TestCafe's Docker image, configuring your test environment as needed via a Dockerfile.
- EC2 and Nginx: Execute tests on an EC2 machine and use Nginx to serve the reports via a URL.
- ECS Service: Run your tests using the AWS ECS service and upload the output to an S3 bucket.
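The Page Object Model item above is easiest to see in code. In a real TestCafe project the fields below would be `Selector(...)` objects and `t` would be TestCafe's test controller; plain strings and a minimal interface are used here so the sketch stands alone:

```typescript
// Page Object Model sketch: one class per page, with selectors and
// actions in one place so tests never duplicate them.
// In real TestCafe code the selectors would be Selector("...") objects.

interface TestController {
  typeText(selector: string, text: string): Promise<void> | void;
  click(selector: string): Promise<void> | void;
}

class LoginPage {
  readonly emailInput = "#email";
  readonly passwordInput = "#password";
  readonly submitButton = "button[type=submit]";

  // Tests call page.login(t, ...) instead of repeating selectors;
  // if the markup changes, only this class needs updating.
  async login(t: TestController, email: string, password: string): Promise<void> {
    await t.typeText(this.emailInput, email);
    await t.typeText(this.passwordInput, password);
    await t.click(this.submitButton);
  }
}
```

A test then reads as intent ("log in, then assert the dashboard") rather than as a pile of CSS selectors, which is the maintainability win POM is after.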
-
🌐 Exploring the Strategy Design Pattern in Software Development 🌐

📖 Real-Life Example: TextProcessor
Imagine we're building a text processing app capable of outputting text in various formats: plain text, HTML, and Markdown. Instead of intertwining these formats in a tangled web of code, we use the Strategy Pattern to neatly encapsulate them into separate, interchangeable strategies. This approach makes our TextProcessor adaptable and easily extendable - a perfect example of design pattern prowess! 💻✨

🔗 Check out the project here: https://lnkd.in/dnhw4kN3

🔍 Widespread Use Cases:
- E-Commerce Dynamic Pricing: Adapt pricing strategies effortlessly based on changing criteria like customer loyalty, seasonal demand, or promotional events.
- Navigation Apps: Seamlessly switch between routing algorithms (shortest, fastest, eco-friendly) based on user preference or current traffic conditions.
- Data Compression Tools: Select from a variety of compression algorithms tailored to the data type or required compression efficiency.
- Machine Learning Workflows: Dynamically change ML algorithms for model training depending on data characteristics or specific accuracy needs.

✨ Pros of the Strategy Pattern:
- Enhanced Flexibility: Algorithms or processes are easily interchangeable without changing the client code.
- Scalability and Maintenance: New strategies can be introduced without altering existing code, in line with the Open/Closed Principle.
- Decoupling: Separates the implementation details from the client, reducing dependencies.
- Independent Testing: Each strategy can be tested in isolation, promoting more reliable and easier testing processes.

🚧 Cons of the Strategy Pattern:
- Class Proliferation: Can lead to an increase in the number of classes, especially with many strategies.
- Potential Overhead: Additional complexity might be introduced, which could be overkill for simple scenarios.
- Client Responsibility: Clients need to understand the available strategies to make informed choices.

🔥 As software developers, mastering design patterns like the Strategy Pattern is crucial to enhancing code quality and future-proofing our applications. #SoftwareDevelopment #StrategyPattern #DesignPatterns #ProgrammingTips #CodeQuality
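The TextProcessor example described above reads roughly like this (a sketch of the pattern in TypeScript, not the linked project's actual code):

```typescript
// Strategy pattern: each output format is an interchangeable strategy.

interface OutputStrategy {
  render(text: string): string;
}

class PlainTextStrategy implements OutputStrategy {
  render(text: string): string {
    return text;
  }
}

class HtmlStrategy implements OutputStrategy {
  render(text: string): string {
    return `<p>${text}</p>`;
  }
}

class MarkdownStrategy implements OutputStrategy {
  render(text: string): string {
    return `**${text}**`;
  }
}

// The client holds a strategy and can swap it at runtime without any
// change to its own code: the Open/Closed Principle in action.
class TextProcessor {
  constructor(private strategy: OutputStrategy) {}

  setStrategy(strategy: OutputStrategy): void {
    this.strategy = strategy;
  }

  process(text: string): string {
    return this.strategy.render(text);
  }
}
```

Adding a fourth format means adding one new class, with no edits to TextProcessor or its callers - which is also why each strategy can be unit tested in isolation, as the pros list notes.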