Refactoring filters for enhanced front-end performance at Navee
In this in-depth exploration, we delve into the critical refactoring of filter technology within our software architecture at Navee, a pivotal step towards enhancing our brand protection capabilities. I will guide you through the challenges we faced, the innovative solutions we implemented, and the strategic reasoning behind our decisions. Additionally, I'll discuss the necessity of this refactoring and outline our vision for an ideal code structure that supports advanced anti-counterfeit measures.
Upon joining Navee, I quickly identified a key challenge—the absence of a dedicated and highly skilled front-end leader. This gap led to decentralized coding practices where developers autonomously crafted features, wrote code, and addressed bugs. While this allowed for rapid feature deployment, it inevitably led to increasing technical debt, decreasing development velocity, and escalating maintenance costs—factors that can compromise the effectiveness of anti-counterfeit technologies. This scenario often gave rise to what is colloquially known as "code smell"—code that, while functional, is fraught with inefficiencies and complexities, leading to demotivation and pessimism among the team.
Confronted with this reality, I recognized the urgent need to overhaul our approach to not only enhance the coding experience but also to strengthen our product’s capacity for brand protection. This article details how strategic refactoring is integral not just to improving our software but crucial for delivering superior, scalable anti-counterfeit solutions that meet the evolving needs of brands looking to protect their assets.
Streamlining data filtering: a closer look at our refactoring journey
As we dive deeper into the specifics of the refactoring process at Navee, I will detail the steps taken to overhaul the code associated with data filtering in our application's feed view. Our platform features several distinct feed pages, each equipped with its own unique set of filters. For instance, while the first page utilizes filters A, B, and C, the second page makes use of filters B, D, and F. At first glance, filters may seem like straightforward features to implement. However, within our application, the reality is more complex due to the varying combinations of filters across different pages, each contributing its own functionality.
Previously, our system’s architecture was designed around a centralized state storage for filters. This meant that all feed pages accessed and interacted with this single state repository to fetch and manipulate their respective filter states. This approach, while seemingly efficient, presented several challenges, particularly in maintaining filter integrity across multiple pages without causing unintended interactions or state overlaps. In the following sections, I will explore the ramifications of this architecture and how we addressed them to achieve a more robust and scalable filtering solution.
This architecture led to these errors:
- Changing filter A on page one also led to changes in the same filter on page two.
- Setting a value for filter B on page one applied it on page two, even if it wasn't displayed there.
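Both bugs trace back to the single shared state. As a minimal, hypothetical sketch (the names here are invented for illustration, not our actual code), this is how one filter slice shared by every feed page leaks state across pages:

```typescript
// Hypothetical sketch of the legacy shared state: one filter slice
// used by every feed page, so a change on one page leaks to the others.
type FiltersState = { filterA: string | null; filterB: string | null };

let sharedFilters: FiltersState = { filterA: null, filterB: null };

// Page one updates filter A in the shared slice...
function setFilterOnPageOne(value: string): void {
  sharedFilters = { ...sharedFilters, filterA: value };
}

// ...and page two reads the very same slice, so it sees the change
// even though it renders its own, supposedly independent, filter A.
function readFilterOnPageTwo(): string | null {
  return sharedFilters.filterA;
}
```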
These were the most memorable and significant errors to date. When I received the task to fix the first one, I realized it was the right moment to refactor this functionality.
Regarding the codebase, here's what I faced:
- Approximately 10 filter components, each with different responsibilities and non-standardized methods for changing filter values. Some components dispatched a single action, while others dispatched several actions to change their state.
- Most component files contained their own constants inline, while others kept constants in dedicated files labeled "constants."
- We had one component that determined which filters to display based on the route parsed from the current URL path.
- Some callbacks were placed inline, and some had their own dedicated function. This arrangement did not follow any logical pattern (such as the number of lines in the function or other considerations).
- Some callbacks contained mapping inside, while other functions used a separate “mapping function.”
- Sometimes, inline callback functions included requests to the backend, try-catch statements, and error handling.
- Prop forwarding was a common practice. Context usage was notably absent.
- For backend requests, we had a "god mapping parameters function" that decided how parameters should be transformed based on their keys. For example, if the parameter had the key "account labels," it would prepend an "account_" prefix to the value. This function used the "isPageA" key to determine how parameters should be transformed. In one instance, there was an additional "transform parameters function."
- Before sending a query to the backend, we always compared the current filter state with a “default” state to determine what filters had changed.
These issues are just the first that come to mind. From my perspective, the term “inconsistency” aptly describes the overall approach to software development previously in place.
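To make the inconsistency concrete, here is a hedged reconstruction of what such a "god mapping" function looked like. The key names (`accountLabels`, the `status` branch) are illustrative, not the real code—the point is the shape of the problem: one function deciding, key by key and page by page, how every parameter is transformed.

```typescript
// Illustrative reconstruction of the legacy "god mapping" function:
// a single mapper with key-specific and page-specific branches mixed in.
type Params = Record<string, string | string[]>;

function mapParamsForBackend(params: Params, isPageA: boolean): Params {
  const mapped: Params = {};
  for (const [key, value] of Object.entries(params)) {
    if (key === 'accountLabels' && Array.isArray(value)) {
      // Prefixing behavior buried inside a generic mapper...
      mapped[key] = value.map((label) => `account_${label}`);
    } else if (isPageA && key === 'status') {
      // ...alongside page-specific special cases keyed on "isPageA".
      mapped[key] = String(value).toUpperCase();
    } else {
      mapped[key] = value;
    }
  }
  return mapped;
}
```

Every new filter or page meant another branch in this function, which is exactly what the refactoring set out to eliminate.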
Dealing with this code is disheartening, and it’s messy. But I’m here to make a difference, and I can do something about it. So, shall we get started?
Refining our approach: the refactoring strategy at Navee
When I began the refactoring process, it felt overwhelming—the issues and the potential solutions were jumbled in my mind. I knew that the existing setup was flawed and that a streamlined, simplified approach was essential. After defining the core areas in need of overhaul, I embarked on the refactoring journey.
I chose to refactor the existing code rather than starting from scratch for two main reasons:
- Selective Refactoring: This approach allowed us to update parts of the application incrementally, ensuring compatibility between the refactored and legacy sections. This method maintained system integrity, allowing the application to function smoothly during the transition.
- Strategic Insights: By refactoring one section at a time, we could identify which parts of the application needed attention next, helping prioritize our efforts effectively.
The refactoring process was structured around three main milestones:
- Prepare Code for Refactoring: This initial stage involved a comprehensive review of all components in the Redux store, including more than 60 actions and reducers. Our goal was to understand and clean the existing structure thoroughly.
- Transition to New Architecture: We shifted from the old architectural framework to a cleaner, more modern setup that would support easier maintenance and scalability.
- Bug Fixing and Reconciliation: The final step involved resolving any inconsistencies and bugs between the old and new systems, ensuring the application's stability and functionality.
After three weeks of diligent work by my colleague on the first milestone, we achieved a significantly cleaner and more manageable codebase within the old architectural framework.
Implementing a future-proof architecture for filters
Moving forward with the new architecture for filters, I had a clear vision of what I wanted to achieve:
- Ease of Implementation and Maintenance: The new architecture needed to simplify the addition and maintenance of filters, allowing developers to integrate new filters effortlessly—akin to a copy-paste process.
- Adherence to MVC Architecture: Ensuring the model-view-controller (MVC) pattern was followed to enhance scalability and maintainability.
- Use of TypeScript: Implementing TypeScript to leverage its powerful type-checking and scalability features.
- Separation of Business Logic: Business logic associated with filters needed to be decoupled from Redux reducers, fostering better separation of concerns.
- Simplification of Interfaces: All interfaces should be clean, simple, and reusable, facilitating easier integration and future development.
With these goals in mind, I set out to refactor the system to be intuitive, easy to navigate, and robust enough to handle all the edge cases inherent in our business processes.
In crafting the new architecture, I focused on applying the atomic design principles not just to UI components but to the types and entities within the code. This method allowed us to build a modular, flexible system where each component, or "atom," was self-contained yet fully integrable within the larger "molecule" of our application framework.
The envisioned structure promised a transformative impact on how we build and evolve our application, aligning with Navee’s commitment to innovation and quality in brand protection.
We have some Atom filters, each of which knows everything about itself:
- State
- Label
- Value
- UI component
- All functions to update its value
- How to read its value from a search query or from a saved filter.
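Under these assumptions, the atom contract can be sketched roughly like this in TypeScript. The interface and the example filter are illustrative, not our production code, and the UI component reference is omitted for brevity:

```typescript
// Sketch of a filter atom: it owns its key, label, default value,
// update function, and query (de)serialization.
interface FilterAtom<T> {
  key: string;
  label: string;
  defaultValue: T;
  // Pure update: returns the next value instead of mutating state.
  update(current: T, next: T): T;
  // How to read and write the value from a URL search query.
  fromQuery(query: URLSearchParams): T | undefined;
  toQuery(value: T): string;
}

// Example atom: a multi-select "status" filter.
const statusFilter: FilterAtom<string[]> = {
  key: 'status',
  label: 'Status',
  defaultValue: [],
  update: (_current, next) => next,
  fromQuery: (query) => query.get('status')?.split(','),
  toQuery: (value) => value.join(','),
};
```

Because every atom implements the same contract, adding a new filter really does become close to a copy-paste exercise.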
We should also have a component for managing the state of several atomic filters, our filter molecule.
The filter molecule should know:
- Which atoms it has.
- How to initialize the value of filters.
- How to update values in case the values of two filters depend on each other.
- How to call functions to initialize values of filters from a query string or from a “saved value”.
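A minimal sketch of such a molecule, assuming a simplified atom shape (all names here are illustrative):

```typescript
// Simplified atom shape for this sketch.
type Atom<T> = {
  key: string;
  defaultValue: T;
  fromQuery: (query: URLSearchParams) => T | undefined;
};

type MoleculeState = Record<string, unknown>;

// The molecule knows its atoms and how to initialize them as a group.
function createMolecule(atoms: Atom<unknown>[]) {
  return {
    keys: atoms.map((atom) => atom.key),
    // Initialize every atom from the URL, falling back to its default.
    initFromQuery(query: URLSearchParams): MoleculeState {
      const state: MoleculeState = {};
      for (const atom of atoms) {
        state[atom.key] = atom.fromQuery(query) ?? atom.defaultValue;
      }
      return state;
    },
  };
}
```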
If we look more broadly at how our components interact with this model, the overall scheme of the refactored code emerges. Let's walk through how it works: the feed filters molecule knows which filters it contains and how to interact with them.
Then we have a Redux store that contains our feed molecule. Components read the molecule's value from the store, and the store knows how to update that value in response to dispatched actions.
Next, a dedicated container component knows which filter molecule it should render, how to read the molecule from the Redux store, and which actions to dispatch to update the molecule's value. It also reads any additional values the filter components need from Redux. In short, this component reads all values from the store, prepares them for rendering, renders the filter components, and provides each atomic filter component with an update method.
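As a hedged sketch of the store side of this wiring (a plain reducer function here; the action shape is an assumption), the reducer only stores the molecule's state and exposes one standardized update action, while filter business logic stays in the molecule:

```typescript
// Sketch: the Redux reducer holds the molecule's state; it does not
// encode any filter-specific business logic.
type FiltersState = Record<string, unknown>;

type FiltersAction =
  | { type: 'filters/set'; key: string; value: unknown }
  | { type: 'filters/reset' };

const initialFilters: FiltersState = { status: [], brand: null };

function filtersReducer(
  state: FiltersState = initialFilters,
  action: FiltersAction,
): FiltersState {
  switch (action.type) {
    case 'filters/set':
      // One standardized action shape for every atomic filter.
      return { ...state, [action.key]: action.value };
    case 'filters/reset':
      return initialFilters;
    default:
      return state;
  }
}
```

This replaces the old situation where some components dispatched one action and others dispatched several to change their state.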
At the same time, our page component initiates sagas for fetching data for filters.
Once all the data is fetched, the saga writes it to the Redux store.
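Since I can't show our actual sagas, here is a dependency-free async sketch of the same fetch-then-store flow; `fetchOptions` and `dispatch` stand in for the real API call and the Redux dispatch:

```typescript
// Sketch of the saga's flow as a plain async function: fetch the
// filter data, then write it to the store in a single action.
type Action = { type: string; payload?: unknown };

async function loadFilterData(
  fetchOptions: () => Promise<string[]>,
  dispatch: (action: Action) => void,
): Promise<void> {
  try {
    const options = await fetchOptions();
    dispatch({ type: 'filters/optionsLoaded', payload: options });
  } catch (error) {
    // Errors are handled here once, not in inline component callbacks.
    dispatch({ type: 'filters/optionsFailed', payload: String(error) });
  }
}
```

Centralizing the try-catch here is what lets us delete the inline callbacks that previously mixed backend requests and error handling into the components.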
In this atom-molecule architecture, we have an ideal place to encapsulate all business logic. At the atom level, each atom manages its specific logic. For example, when we trigger a read of the filter state from the search bar, each filter knows how to interpret the values from the query.
For implementing cross-filter business logic, we utilize molecules. When updating an atom filter value, the filter molecule checks what the change is, determines if the change should propagate within the molecule, and updates several filter components simultaneously.
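A hedged sketch of this propagation step—the brand/model dependency below is an invented example, but the mechanism is the one described above:

```typescript
// Sketch: the molecule intercepts an atom update and decides whether
// it must cascade to other atoms in the same molecule.
type MoleculeState = Record<string, unknown>;

function updateWithPropagation(
  state: MoleculeState,
  key: string,
  value: unknown,
): MoleculeState {
  const next: MoleculeState = { ...state, [key]: value };
  // Invented dependency: changing the brand invalidates the selected
  // model, so the molecule resets it within the same update.
  if (key === 'brand') {
    next['model'] = null;
  }
  return next;
}
```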
To make this system functional, it's necessary to define all interfaces for interactions. This task is straightforward, especially since we have already defined all atoms and understand the responsibilities of each atom and code block.
Reflecting on our refactoring journey and its impact on anti-counterfeiting efforts
As we conclude this deep dive into our refactoring process, it’s clear that we’ve achieved two significant goals. Firstly, we have established a robust structure that supports the seamless integration and functionality of business features with filters. This system is designed to serve our needs well into the future, significantly reducing the time and cost associated with maintaining and upgrading our application.
Secondly, this refactoring journey has bolstered our anti-counterfeiting capabilities. By streamlining our filters and enhancing their precision, we've improved our ability to detect and mitigate counterfeit activities, protecting our clients more effectively than ever before. This not only aligns with our technical objectives but also reinforces our commitment to fighting counterfeiting—a core mission at Navee.
I wrote this article to highlight the importance of developers taking ownership of their code, from inception through to deployment. It is crucial for developers to routinely verify their fixes and features. This proactive approach does not just preempt potential issues but also drives cost-efficiency, saving the company valuable resources. Moreover, by engaging deeply with their work, developers gain a better understanding of the product, contributing to more thoughtful and impactful innovations.
If this article resonated with you or sparked any thoughts, I encourage you to connect with us on LinkedIn and share your feedback.
About the Contributor
Semyon Evstigneev is the Lead Frontend Engineer at Navee, a company specializing in advanced e-commerce and anti-counterfeit solutions. With a strong background in software development and a keen focus on front-end architecture, Semyon has played a pivotal role in driving technical innovation at Navee.