Monday, August 4, 2025
Improved Comment Accuracy
Simple Commenter has received a major behind-the-scenes update designed to make feedback more accurate and reliable than ever before.
A common challenge with website feedback is ensuring comments remain attached to the correct element as a site evolves. A comment left on an <h1> header could previously go missing if the <h1> tag's position in the DOM changed. This update addresses that challenge head-on.
The element-finding logic within Simple Commenter is now dramatically smarter, ensuring feedback stays anchored to the right element no matter how the page changes.
How It Works: More Context, Better Accuracy
Think of Simple Commenter as trying to find a specific office in a large building. Previously, the system used a very rigid set of directions, like a blueprint location:
"Start at the main entrance, go to the 2nd hallway, then enter the 5th door on the left."
This was fast and precise, as long as the building layout never changed.
However, if a developer renovated the building and added a new office in that hallway, the "5th door on the left" would suddenly point to the wrong room. This is precisely what happens on websites—a small design change could make the old directions lead to the wrong element, causing your comment to appear misplaced.
Now, the system is much smarter. Instead of just following a rigid path, it also has a detailed description of the office itself: "I'm looking for the office with 'Sales Department' written on the door, inside the main 'Executive Wing'." This 'detailed description' is what we refer to as an element's 'fingerprint' – a unique identifier created by combining various attributes of the element itself and its immediate parent.
This powerful new approach means that even if the building layout changes, Simple Commenter can intelligently scan the area and find the element that matches its unique fingerprint.
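To make the "fingerprint" idea concrete, here is a minimal sketch of what a getElementFingerprint-style function might collect. The property names and the plain-object input are illustrative stand-ins (the shipped version works on live DOM nodes and records more attributes):

```javascript
// Hypothetical sketch of a fingerprint collector. Property names
// (tag, id, text, classes, parentTag) are illustrative, not the
// actual shipped schema.
function getElementFingerprint(el) {
  return {
    tag: el.tagName.toLowerCase(),                    // element type ("The Structure")
    id: el.id || null,                                // developer-assigned anchor ("The Identity")
    text: (el.textContent || '').trim().slice(0, 80), // visible text ("The Content")
    classes: [...el.classList].sort(),                // styling hooks ("The Structure")
    parentTag: el.parentElement                       // immediate neighborhood ("The Location")
      ? el.parentElement.tagName.toLowerCase()
      : null,
  };
}

// Plain-object stand-in for a DOM node, so the sketch runs outside a browser.
const navLink = {
  tagName: 'A',
  id: '',
  textContent: '  Products  ',
  classList: ['nav-link', 'active'],
  parentElement: { tagName: 'NAV' },
};

const fp = getElementFingerprint(navLink);
// fp.tag === 'a', fp.text === 'Products', fp.parentTag === 'nav'
```

Combining the element's own attributes with its parent's is what lets two visually identical elements in different parts of the page produce different fingerprints.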

Behind the scenes, the system conducts a full investigation, collecting a much richer set of contextual clues for every single comment. This includes:
- The Content: The text on an element remains a vital clue. The system is now better at handling minor text edits, such as changing "Step-by-step" to "Step by step."
- The Structure: The element's specific type and styling are now recorded. A main heading is treated differently from a small paragraph or a special kind of button, helping to distinguish it from other elements.
- The Location: This is the most significant upgrade. The system now understands an element's "neighborhood" on the page. A "Products" link in the main navigation menu is now treated as completely distinct from a "Products" link in the site footer. This contextual awareness is crucial for accuracy.
- The Identity: For developers, the system now recognizes specific IDs and other attributes used to mark important elements, using them as rock-solid anchor points.
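One way to picture how these clues combine is as a weighted score: each candidate element is compared clue by clue against the saved fingerprint, and the best candidate above a threshold wins. The weights and threshold below are invented for illustration, not the shipped algorithm:

```javascript
// Hypothetical scoring sketch. Weights and threshold are illustrative;
// the real matcher combines more signals.
function scoreCandidate(saved, candidate) {
  let score = 0;
  if (saved.id && saved.id === candidate.id) score += 5;   // "Identity": rock-solid anchor
  if (saved.tag === candidate.tag) score += 2;             // "Structure"
  if (saved.parentTag === candidate.parentTag) score += 2; // "Location": same neighborhood
  if (saved.text === candidate.text) score += 3;           // "Content"
  return score;
}

function findBestMatch(saved, candidates, threshold = 4) {
  let best = null;
  let bestScore = 0;
  for (const c of candidates) {
    const s = scoreCandidate(saved, c);
    if (s > bestScore) { best = c; bestScore = s; }
  }
  // Refuse to guess when nothing is similar enough,
  // rather than attach the comment to the wrong element.
  return bestScore >= threshold ? best : null;
}

// Two "Products" links, distinguished only by their neighborhood:
const saved      = { tag: 'a', id: null, text: 'Products', parentTag: 'nav' };
const navLink    = { tag: 'a', id: null, text: 'Products', parentTag: 'nav' };
const footerLink = { tag: 'a', id: null, text: 'Products', parentTag: 'footer' };

findBestMatch(saved, [footerLink, navLink]); // → navLink (location breaks the tie)
```

Note the threshold: returning null when no candidate scores high enough is what lets the system wait for a hidden element to reappear instead of guessing.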
What This Means For Users
- Unmatched Reliability: Comments will now stay locked to the correct element, even if the page layout is changed or an element is updated from a paragraph to a heading.
- Fewer "Lost" Comments: The system is now much better at re-discovering elements that are temporarily hidden inside dropdown menus or pop-ups. It will patiently wait for them to reappear instead of making an incorrect guess.
- Total Confidence: Users can leave feedback with the confidence that it’s anchored precisely where intended, providing the clearest possible direction for development and design teams.
This advanced element detection is now active on all projects. It represents a significant step forward in making Simple Commenter the most intuitive and reliable feedback tool available.
Beyond the Technical Release Notes
Developing this new element-finding logic truly pushed me outside my comfort zone, forcing me to dive deep into areas of coding I hadn't extensively explored before.
One of the most significant things I learned and implemented during this process was the Levenshtein distance algorithm, also known as edit distance. In simple terms, it's a metric for measuring the difference between two sequences (in our case, two strings of text). It quantifies the minimum number of single-character edits (insertions, deletions, or substitutions) required to change one word or phrase into the other.
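A textbook dynamic-programming implementation of the algorithm looks roughly like this (a standard version for illustration, not necessarily the exact code in Simple Commenter):

```javascript
// Classic dynamic-programming Levenshtein distance: dp[i][j] holds the
// minimum number of edits to turn the first i characters of a into the
// first j characters of b.
function levenshtein(a, b) {
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      const cost = a[i - 1] === b[j - 1] ? 0 : 1;
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,        // deletion
        dp[i][j - 1] + 1,        // insertion
        dp[i - 1][j - 1] + cost  // substitution (or free match)
      );
    }
  }
  return dp[a.length][b.length];
}

levenshtein('Step-by-step', 'Step by step'); // → 2 (two hyphens become spaces)
levenshtein('kitten', 'sitting');            // → 3
```

In practice the raw distance is usually normalized into a similarity ratio, e.g. `1 - distance / Math.max(a.length, b.length)`, and compared against a threshold to decide whether two texts count as "the same".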

For Simple Commenter, this means if a client comments on an element with the text "Step-by-step Guide," and later that text is changed to "Step by step Guide" (just a space added, a minor edit), the Levenshtein algorithm helps our system understand that these two texts are very, very similar, rather than treating them as completely different. This allows the comment to remain accurately attached to the correct element, even after small textual refinements on the page. It's a powerful way to add a layer of "fuzzy" matching to our precise element identification.
Interestingly, this wasn't my first brush with advanced text analysis. A while back, I was developing a media monitoring application – a project that never saw completion, unfortunately. However, during that endeavor, I gained valuable experience in keyword analysis and various other text analysis techniques, such as sentiment analysis (determining the emotional tone of text) and topic modeling (identifying underlying themes in large sets of text). While the context was different, that foundational exposure to parsing and understanding textual data proved surprisingly relevant and helpful in tackling the complexities of robust element identification for Simple Commenter. It's rewarding to see how different coding journeys can converge and provide unexpected insights!
Simple Commenter in 2 minutes