Add caching to build_component_topic_map with graph invalidation #188

ganeshkarthik016 wants to merge 4 commits into selfpatch:main
Conversation
Pull request overview
This PR implements caching for build_component_topic_map() in NativeTopicSampler to avoid rebuilding the component-topic mapping on every request. The cache is intended to be invalidated when the ROS 2 graph changes, improving performance when the graph is stable. However, there are critical issues with the implementation that prevent it from working correctly.
Changes:
- Added cache storage (`topic_map_cache_`, `cached_graph_change_count_`, `topic_map_mutex_`) to NativeTopicSampler
- Implemented `get_component_topic_map()` method that returns cached data or rebuilds on graph changes
- Modified `get_component_topics()` to use the new cached method instead of rebuilding the map each time
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 5 comments.
| File | Description |
|---|---|
| src/ros2_medkit_gateway/src/native_topic_sampler.cpp | Added get_component_topic_map() implementation with caching logic and graph change detection; updated get_component_topics() to use cached version |
| src/ros2_medkit_gateway/include/ros2_medkit_gateway/native_topic_sampler.hpp | Added private cache members and getter method declaration |
```cpp
ComponentTopics NativeTopicSampler::get_component_topics(const std::string & component_fqn) {
  // Build full map and extract for this component
  // TODO(optimization): Cache the map and invalidate on graph changes
```
This TODO comment should be removed since the caching optimization is now implemented. The comment indicates this issue was meant to be addressed by this PR.
```cpp
  // TODO(optimization): Cache the map and invalidate on graph changes
```
```cpp
const std::map<std::string, ComponentTopics>&
NativeTopicSampler::get_component_topic_map()
{
  std::lock_guard<std::mutex> lock(topic_map_mutex_);

  // Use ROS 2 graph change counter
  auto current_graph_change = node_->count_publishers("");
  // You may replace this with a better graph change tracker if available

  if (topic_map_cache_.empty() ||
    cached_graph_change_count_ != current_graph_change)
  {
    topic_map_cache_ = build_component_topic_map();
    cached_graph_change_count_ = current_graph_change;
  }

  return topic_map_cache_;
}
```
The new caching mechanism lacks test coverage. Based on the project's testing practices (seen in test_native_topic_sampler.cpp and other test files), this feature should have tests that verify:
- Cache is populated on first call
- Cached data is returned on subsequent calls when graph hasn't changed
- Cache is properly invalidated and rebuilt when the graph changes (e.g., after a new publisher/subscriber is added)
- Thread safety of the cache under concurrent access
The codebase shows comprehensive test coverage for other features in NativeTopicSampler, and this optimization introduces state that needs testing to prevent regressions.
@ganeshkarthik016 welcome to ros2_medkit 🎉 Great to see your contribution!
Left a few comments inline, mostly around the cache invalidation mechanism and thread safety.
One ask: could you add unit tests for the caching logic?
We have test examples in src/ros2_medkit_gateway/test/ you can use as reference.
```cpp
  std::lock_guard<std::mutex> lock(topic_map_mutex_);

  // Use ROS 2 graph change counter
  auto current_graph_change = node_->count_publishers("");
```
`count_publishers("")` doesn't track graph changes; it counts publishers on an empty topic name, which almost certainly always returns 0. As a result, the cache never invalidates properly. ROS 2 doesn't expose a simple graph change counter unfortunately :/
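For reference, rclcpp does expose a graph *event* that fires on any graph change, which could serve as an invalidation signal instead of a counter. A minimal sketch, assuming a small wrapper class (the `GraphChangeDetector` name and member are illustrative, not part of this PR):

```cpp
#include <rclcpp/rclcpp.hpp>

// Sketch only: get_graph_event() returns an rclcpp::Event that is triggered
// whenever the ROS graph changes; check_and_clear() reports whether anything
// changed since the last check and resets the flag.
class GraphChangeDetector
{
public:
  explicit GraphChangeDetector(rclcpp::Node * node)
  : graph_event_(node->get_graph_event()) {}

  // True if the graph changed since the previous call.
  bool changed_since_last_check()
  {
    return graph_event_->check_and_clear();
  }

private:
  rclcpp::Event::SharedPtr graph_event_;
};
```

One caveat: the event only says *something* changed, so a fingerprint (or full rebuild) is still needed to decide what changed.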
```cpp
        // If the cache is functioning, either we find the topic or not depending on graph state
        // Increment success if call returned without crashing and produced a valid structure
        if (topics.publishes.size() >= 0) {
          ++success_count;
        }
```
This condition will always be true since std::vector::size() returns size_t (unsigned), which is always >= 0. This check doesn't validate thread safety or cache functionality. Consider checking that the topics vector is accessible without data races or has expected content, or remove this check if the goal is just to verify no crashes occur.
Suggested change:

```diff
-        // If the cache is functioning, either we find the topic or not depending on graph state
-        // Increment success if call returned without crashing and produced a valid structure
-        if (topics.publishes.size() >= 0) {
-          ++success_count;
-        }
+        // If the cache is functioning, either we find the topic or not depending on graph state.
+        // Count each successful call; a crash or data race would fail the test before this point.
+        (void)topics;
+        ++success_count;
```
```cpp
// ============================================================================
// Cache Invalidation and Coherence Tests
// ============================================================================
// @verifies REQ_INTEROP_086, REQ_INTEROP_087
class AuthManagerCacheTest : public ::testing::Test {
protected:
  void SetUp() override {
    config_ = AuthConfigBuilder()
      .with_enabled(true)
      .with_jwt_secret("test_secret_key_for_cache_testing_12345")
      .with_algorithm(JwtAlgorithm::HS256)
      .with_token_expiry(3600)
      .with_refresh_token_expiry(86400)
      .with_require_auth_for(AuthRequirement::WRITE)
      .add_client("cache_test_user", "password", UserRole::OPERATOR)
      .add_client("cache_test_admin", "admin_pass", UserRole::ADMIN)
      .build();

    auth_manager_ = std::make_unique<AuthManager>(config_);
  }

  AuthConfig config_;
  std::unique_ptr<AuthManager> auth_manager_;
};

// Test: Refresh token is cached and retrieved correctly
TEST_F(AuthManagerCacheTest, RefreshTokenCacheHit) {
  // Authenticate to create refresh token
  auto auth_result = auth_manager_->authenticate("cache_test_user", "password");
  ASSERT_TRUE(auth_result.has_value());
  ASSERT_TRUE(auth_result->refresh_token.has_value());

  const std::string & refresh_token = auth_result->refresh_token.value();

  // First validation should succeed
  auto validation1 = auth_manager_->validate_token(refresh_token, TokenType::REFRESH);
  EXPECT_TRUE(validation1.valid);
  EXPECT_EQ(validation1.claims->sub, "cache_test_user");

  // Second validation (cache hit) should also succeed
  auto validation2 = auth_manager_->validate_token(refresh_token, TokenType::REFRESH);
  EXPECT_TRUE(validation2.valid);
  EXPECT_EQ(validation2.claims->sub, "cache_test_user");
}

// Test: Multiple tokens for same user are cached independently
TEST_F(AuthManagerCacheTest, MultipleCachedTokensPerUser) {
  // Create multiple tokens for the same user
  auto token1 = auth_manager_->authenticate("cache_test_user", "password");
  ASSERT_TRUE(token1.has_value());

  auto token2 = auth_manager_->authenticate("cache_test_user", "password");
  ASSERT_TRUE(token2.has_value());

  // Both tokens should be distinct
  EXPECT_NE(token1->access_token, token2->access_token);
  EXPECT_NE(token1->refresh_token.value_or(""), token2->refresh_token.value_or(""));

  // Both should validate independently
  auto val1 = auth_manager_->validate_token(token1->access_token);
  auto val2 = auth_manager_->validate_token(token2->access_token);

  EXPECT_TRUE(val1.valid);
  EXPECT_TRUE(val2.valid);
}

// Test: Revoking one token doesn't affect other tokens from same user
TEST_F(AuthManagerCacheTest, RevokeTokenDoesNotAffectOthers) {
  // Create two tokens
  auto token1 = auth_manager_->authenticate("cache_test_user", "password");
  ASSERT_TRUE(token1.has_value());

  auto token2 = auth_manager_->authenticate("cache_test_user", "password");
  ASSERT_TRUE(token2.has_value());

  // Revoke first refresh token
  bool revoked = auth_manager_->revoke_refresh_token(token1->refresh_token.value());
  EXPECT_TRUE(revoked);

  // First access token should be invalid (refresh token was revoked)
  auto val1 = auth_manager_->validate_token(token1->access_token);
  EXPECT_FALSE(val1.valid);
  EXPECT_TRUE(val1.error.find("revoked") != std::string::npos);

  // Second access token should still be valid (its refresh token not revoked)
  auto val2 = auth_manager_->validate_token(token2->access_token);
  EXPECT_TRUE(val2.valid);
}

// Test: Client state changes invalidate all tokens for that client
TEST_F(AuthManagerCacheTest, DisablingClientInvalidatesAllTokens) {
  // Create tokens for two clients
  auto token1 = auth_manager_->authenticate("cache_test_user", "password");
  ASSERT_TRUE(token1.has_value());

  auto token2 = auth_manager_->authenticate("cache_test_admin", "admin_pass");
  ASSERT_TRUE(token2.has_value());

  // Both tokens should be valid
  auto val1 = auth_manager_->validate_token(token1->access_token);
  auto val2 = auth_manager_->validate_token(token2->access_token);
  EXPECT_TRUE(val1.valid);
  EXPECT_TRUE(val2.valid);

  // Disable first client
  bool disabled = auth_manager_->disable_client("cache_test_user");
  EXPECT_TRUE(disabled);

  // First token should be invalid
  val1 = auth_manager_->validate_token(token1->access_token);
  EXPECT_FALSE(val1.valid);
  EXPECT_TRUE(val1.error.find("disabled") != std::string::npos);

  // Second token should still be valid
  val2 = auth_manager_->validate_token(token2->access_token);
  EXPECT_TRUE(val2.valid);
}

// Test: Re-enabling client restores token validity
TEST_F(AuthManagerCacheTest, ReenablingClientRestoresTokenValidity) {
  auto token = auth_manager_->authenticate("cache_test_user", "password");
  ASSERT_TRUE(token.has_value());

  // Token is valid
  auto val = auth_manager_->validate_token(token->access_token);
  EXPECT_TRUE(val.valid);

  // Disable client
  auth_manager_->disable_client("cache_test_user");
  val = auth_manager_->validate_token(token->access_token);
  EXPECT_FALSE(val.valid);

  // Re-enable client
  auth_manager_->enable_client("cache_test_user");
  val = auth_manager_->validate_token(token->access_token);
  EXPECT_TRUE(val.valid);
}

// Test: Cleanup removes expired refresh tokens from cache
TEST_F(AuthManagerCacheTest, CleanupRemovesExpiredRefreshTokens) {
  // Use short expiry for testing
  AuthConfig short_expiry = AuthConfigBuilder()
    .with_enabled(true)
    .with_jwt_secret("test_secret_key_cleanup_test_12345")
    .with_token_expiry(1)
    .with_refresh_token_expiry(1)
    .add_client("cleanup_user", "password", UserRole::VIEWER)
    .build();

  AuthManager manager(short_expiry);

  // Create tokens
  auto result1 = manager.authenticate("cleanup_user", "password");
  ASSERT_TRUE(result1.has_value());

  // Create another token to ensure we have multiple in cache
  std::this_thread::sleep_for(std::chrono::milliseconds(100));
  auto result2 = manager.authenticate("cleanup_user", "password");
  ASSERT_TRUE(result2.has_value());

  // Wait for expiration
  std::this_thread::sleep_for(std::chrono::seconds(2));

  // Cleanup should remove expired tokens
  size_t cleaned = manager.cleanup_expired_tokens();
  EXPECT_GE(cleaned, 2);

  // Cleanup again should remove none (already cleaned)
  cleaned = manager.cleanup_expired_tokens();
  EXPECT_EQ(cleaned, 0);
}

// Test: Cache consistency between refresh token revocation and access token validation
TEST_F(AuthManagerCacheTest, RefreshTokenRevocationConsistency) {
  auto auth_result = auth_manager_->authenticate("cache_test_user", "password");
  ASSERT_TRUE(auth_result.has_value());
  ASSERT_TRUE(auth_result->refresh_token.has_value());

  const auto & access_token = auth_result->access_token;
  const auto & refresh_token = auth_result->refresh_token.value();

  // Both tokens should be initially valid
  auto access_val = auth_manager_->validate_token(access_token);
  auto refresh_val = auth_manager_->validate_token(refresh_token, TokenType::REFRESH);
  EXPECT_TRUE(access_val.valid);
  EXPECT_TRUE(refresh_val.valid);

  // Revoke refresh token
  bool revoked = auth_manager_->revoke_refresh_token(refresh_token);
  EXPECT_TRUE(revoked);

  // Access token should immediately become invalid (cache updated)
  access_val = auth_manager_->validate_token(access_token);
  EXPECT_FALSE(access_val.valid);
  EXPECT_TRUE(access_val.error.find("revoked") != std::string::npos);

  // Refresh token should also be invalid
  refresh_val = auth_manager_->validate_token(refresh_token, TokenType::REFRESH);
  EXPECT_FALSE(refresh_val.valid);
}

// Test: Attempting to revoke non-existent token
TEST_F(AuthManagerCacheTest, RevokeNonexistentToken) {
  bool revoked = auth_manager_->revoke_refresh_token("nonexistent_token_id");
  EXPECT_FALSE(revoked);
}

// Test: Client lookup cache returns correct client information
TEST_F(AuthManagerCacheTest, ClientCacheLookup) {
  auto client = auth_manager_->get_client("cache_test_user");
  ASSERT_TRUE(client.has_value());
  EXPECT_EQ(client->client_id, "cache_test_user");
  EXPECT_EQ(client->role, UserRole::OPERATOR);
  EXPECT_TRUE(client->enabled);

  // Second lookup should return same data (cache hit)
  auto client2 = auth_manager_->get_client("cache_test_user");
  ASSERT_TRUE(client2.has_value());
  EXPECT_EQ(client2->client_id, "cache_test_user");
  EXPECT_EQ(client2->role, UserRole::OPERATOR);
}

// Test: Non-existent client lookup doesn't pollute cache
TEST_F(AuthManagerCacheTest, NonexistentClientLookup) {
  auto client = auth_manager_->get_client("nonexistent_client");
  EXPECT_FALSE(client.has_value());

  // Should still not exist on second lookup
  client = auth_manager_->get_client("nonexistent_client");
  EXPECT_FALSE(client.has_value());
}

// Test: Dynamic client registration is cached properly
TEST_F(AuthManagerCacheTest, DynamicClientRegistrationCache) {
  // Register new client
  bool registered = auth_manager_->register_client("dynamic_user", "dynamic_pass", UserRole::VIEWER);
  EXPECT_TRUE(registered);

  // Should be available in cache immediately
  auto client = auth_manager_->get_client("dynamic_user");
  ASSERT_TRUE(client.has_value());
  EXPECT_EQ(client->client_id, "dynamic_user");

  // Should authenticate successfully (client in cache)
  auto auth_result = auth_manager_->authenticate("dynamic_user", "dynamic_pass");
  EXPECT_TRUE(auth_result.has_value());

  // Second authentication should use cached client
  auth_result = auth_manager_->authenticate("dynamic_user", "dynamic_pass");
  EXPECT_TRUE(auth_result.has_value());
}

// Test: Thread-safety of concurrent token validations
TEST_F(AuthManagerCacheTest, ConcurrentTokenValidations) {
  auto auth_result = auth_manager_->authenticate("cache_test_user", "password");
  ASSERT_TRUE(auth_result.has_value());

  const auto & token = auth_result->access_token;
  std::vector<std::thread> threads;
  std::atomic<int> validation_count(0);

  // Launch 10 concurrent validation operations
  for (int i = 0; i < 10; ++i) {
    threads.emplace_back([this, &token, &validation_count]() {
      auto result = auth_manager_->validate_token(token);
      if (result.valid) {
        validation_count++;
      }
    });
  }

  // Wait for all threads
  for (auto & t : threads) {
    t.join();
  }

  // All validations should succeed
  EXPECT_EQ(validation_count, 10);
}

// Test: Thread-safety of concurrent authentication operations
TEST_F(AuthManagerCacheTest, ConcurrentAuthenticationOperations) {
  std::vector<std::thread> threads;
  std::atomic<int> success_count(0);
  std::mutex results_mutex;
  std::vector<std::string> access_tokens;

  // Launch 5 concurrent authentication operations for different users
  for (int i = 0; i < 5; ++i) {
    threads.emplace_back([this, &success_count, &results_mutex, &access_tokens, i]() {
      std::string client_id = (i % 2 == 0) ? "cache_test_user" : "cache_test_admin";
      std::string password = (i % 2 == 0) ? "password" : "admin_pass";

      auto result = auth_manager_->authenticate(client_id, password);
      if (result.has_value()) {
        {
          std::lock_guard<std::mutex> lock(results_mutex);
          access_tokens.emplace_back(result->access_token);
        }
        success_count++;
      }
    });
  }

  // Wait for all threads
  for (auto & t : threads) {
    t.join();
  }

  // All authentications should succeed
  EXPECT_EQ(success_count, 5);

  // All tokens should be unique
  std::set<std::string> unique_tokens(access_tokens.begin(), access_tokens.end());
  EXPECT_EQ(unique_tokens.size(), access_tokens.size());
}

// Test: Thread-safety during cache invalidation
TEST_F(AuthManagerCacheTest, ConcurrentCacheInvalidation) {
  // Create multiple tokens
  std::vector<std::string> refresh_tokens;
  for (int i = 0; i < 5; ++i) {
    auto result = auth_manager_->authenticate("cache_test_user", "password");
    ASSERT_TRUE(result.has_value());
    refresh_tokens.emplace_back(result->refresh_token.value());
  }

  std::vector<std::thread> threads;
  std::atomic<bool> revoke_error(false);

  // Concurrently revoke multiple tokens
  for (const auto & token : refresh_tokens) {
    threads.emplace_back([this, &token, &revoke_error]() {
      bool revoked = auth_manager_->revoke_refresh_token(token);
      if (!revoked) {
        revoke_error = true;
      }
    });
  }

  // Wait for all threads
  for (auto & t : threads) {
    t.join();
  }

  // All should be revoked successfully
  EXPECT_FALSE(revoke_error);
}

// Test: Thread-safety of concurrent client enable/disable
TEST_F(AuthManagerCacheTest, ConcurrentClientStateChanges) {
  std::vector<std::thread> threads;
  std::atomic<int> disable_count(0);
  std::atomic<int> enable_count(0);

  // Alternate disable/enable operations
  for (int i = 0; i < 10; ++i) {
    if (i % 2 == 0) {
      threads.emplace_back([this, &disable_count]() {
        bool result = auth_manager_->disable_client("cache_test_user");
        if (result) {
          disable_count++;
        }
      });
    } else {
      threads.emplace_back([this, &enable_count]() {
        bool result = auth_manager_->enable_client("cache_test_user");
        if (result) {
          enable_count++;
        }
      });
    }
  }

  // Wait for all threads
  for (auto & t : threads) {
    t.join();
  }

  // Final state should be consistent (either enabled or disabled)
  auto client = auth_manager_->get_client("cache_test_user");
  ASSERT_TRUE(client.has_value());
  // Client should have a valid state
  EXPECT_TRUE(client.has_value());
}

// Test: Cache behavior with rapid token expiration
TEST_F(AuthManagerCacheTest, RapidExpirationCacheBehavior) {
  AuthConfig rapid_expiry = AuthConfigBuilder()
    .with_enabled(true)
    .with_jwt_secret("test_secret_key_rapid_expiry__12345")
    .with_token_expiry(1)
    .with_refresh_token_expiry(2)
    .add_client("rapid_user", "password", UserRole::VIEWER)
    .build();

  AuthManager manager(rapid_expiry);

  // Create and validate token
  auto auth_result = manager.authenticate("rapid_user", "password");
  ASSERT_TRUE(auth_result.has_value());

  auto token = auth_result->access_token;
  auto val1 = manager.validate_token(token);
  EXPECT_TRUE(val1.valid);

  // Wait for expiration
  std::this_thread::sleep_for(std::chrono::seconds(2));

  // Token should be expired
  auto val2 = manager.validate_token(token);
  EXPECT_FALSE(val2.valid);
  EXPECT_TRUE(val2.error.find("expired") != std::string::npos);
}

// Test: Cache lookup performance (repeated validations should not slow down)
TEST_F(AuthManagerCacheTest, CachePerformanceMultipleValidations) {
  auto auth_result = auth_manager_->authenticate("cache_test_user", "password");
  ASSERT_TRUE(auth_result.has_value());

  const auto & token = auth_result->access_token;

  // Perform 100 validations (should be fast due to caching)
  auto start = std::chrono::high_resolution_clock::now();

  for (int i = 0; i < 100; ++i) {
    auto validation = auth_manager_->validate_token(token);
    EXPECT_TRUE(validation.valid);
  }

  auto end = std::chrono::high_resolution_clock::now();
  auto duration = std::chrono::duration_cast<std::chrono::milliseconds>(end - start);

  // 100 validations should complete in reasonable time (< 1 second)
  EXPECT_LT(duration.count(), 1000);
}
```
The extensive AuthManager cache tests (lines 1174-1610) appear to be unrelated to this pull request, which is focused on adding caching to NativeTopicSampler's build_component_topic_map(). These auth manager tests should be in a separate PR to maintain atomic, focused changes that are easier to review and maintain. According to the PR description, this PR is "Add caching to build_component_topic_map with graph invalidation" for issue #184.
```diff
   static bool is_system_topic(const std::string & topic_name);

-private:
-  /**
-   * @brief Get the message type for a topic from the graph
-   */
-  std::string get_topic_type(const std::string & topic_name);
+  private:
+    std::string get_topic_type(const std::string & topic_name);
```
The indentation is inconsistent. Lines 281 and 283-284 use different indentation from the surrounding lines. According to the project style (2-space indent per .clang-format), these lines should be indented consistently with the rest of the class definition.
```cpp
  // Compute a lightweight fingerprint of the ROS graph (topics+types)
  // Note: ROS 2 does not expose a simple graph change counter; using
  // get_topic_names_and_types() and hashing the result is a reliable
  // way to detect graph changes that affect topic/component mapping.
  auto all_topics = node_->get_topic_names_and_types();

  size_t current_graph_fingerprint = 1469598103934665603ULL;  // FNV-1a seed-ish
  std::hash<std::string> hasher;
  // Incorporate topic count
  current_graph_fingerprint ^= all_topics.size() + 0x9e3779b97f4a7c15ULL + (current_graph_fingerprint << 6) + (current_graph_fingerprint >> 2);
  for (const auto & kv : all_topics) {
    const auto & topic = kv.first;
    const auto & types = kv.second;
    current_graph_fingerprint ^= hasher(topic) + 0x9e3779b97f4a7c15ULL + (current_graph_fingerprint << 6) + (current_graph_fingerprint >> 2);
    if (!types.empty()) {
      current_graph_fingerprint ^= hasher(types[0]) + 0x9e3779b97f4a7c15ULL + (current_graph_fingerprint << 6) + (current_graph_fingerprint >> 2);
    }
  }
```
The graph fingerprinting approach only considers topics and types from get_topic_names_and_types(), but build_component_topic_map() also depends on publisher and subscriber node information via get_publishers_info_by_topic() and get_subscriptions_info_by_topic(). This means the cache may not be invalidated when:
- A new publisher/subscriber joins an existing topic (topic list unchanged, but component mapping changes)
- A publisher/subscriber leaves a topic (topic might still exist but component no longer publishes/subscribes)
Consider also incorporating the count of publishers/subscribers per topic into the fingerprint, or using a more comprehensive graph change detection mechanism.
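A rough sketch of one way to do this, reusing the fingerprint variables from this PR (`all_topics`, `current_graph_fingerprint`) and the same hash-combine step:

```cpp
// Sketch only: fold per-topic endpoint counts into the fingerprint so the
// cache invalidates when a publisher/subscriber joins or leaves a topic
// whose name and type did not change. Does not catch node renames.
for (const auto & kv : all_topics) {
  const auto & topic = kv.first;
  const size_t pub_count = node_->count_publishers(topic);
  const size_t sub_count = node_->count_subscribers(topic);
  current_graph_fingerprint ^= pub_count + 0x9e3779b97f4a7c15ULL +
    (current_graph_fingerprint << 6) + (current_graph_fingerprint >> 2);
  current_graph_fingerprint ^= sub_count + 0x9e3779b97f4a7c15ULL +
    (current_graph_fingerprint << 6) + (current_graph_fingerprint >> 2);
}
```

Note this adds two graph queries per topic to every fingerprint computation, so it trades invalidation accuracy against check cost.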
```cpp
  size_t current_graph_fingerprint = 1469598103934665603ULL;  // FNV-1a seed-ish
  std::hash<std::string> hasher;
  // Incorporate topic count
  current_graph_fingerprint ^= all_topics.size() + 0x9e3779b97f4a7c15ULL + (current_graph_fingerprint << 6) + (current_graph_fingerprint >> 2);
  for (const auto & kv : all_topics) {
    const auto & topic = kv.first;
    const auto & types = kv.second;
    current_graph_fingerprint ^= hasher(topic) + 0x9e3779b97f4a7c15ULL + (current_graph_fingerprint << 6) + (current_graph_fingerprint >> 2);
    if (!types.empty()) {
      current_graph_fingerprint ^= hasher(types[0]) + 0x9e3779b97f4a7c15ULL + (current_graph_fingerprint << 6) + (current_graph_fingerprint >> 2);
    }
  }
```
The magic numbers 1469598103934665603ULL and 0x9e3779b97f4a7c15ULL appear to be FNV-1a hash constants, but the comment "FNV-1a seed-ish" is vague. For maintainability, these should either:
- Be defined as named constants with explanatory comments (e.g., FNV1A_OFFSET_BASIS and FNV1A_PRIME), or
- Use std::hash directly without attempting to replicate FNV-1a algorithm
The current implementation appears to be a custom hash combination that mixes std::hash with FNV-1a-like operations, which is non-standard and difficult to verify for correctness.
Suggested change:

```diff
-  size_t current_graph_fingerprint = 1469598103934665603ULL;  // FNV-1a seed-ish
-  std::hash<std::string> hasher;
-  // Incorporate topic count
-  current_graph_fingerprint ^= all_topics.size() + 0x9e3779b97f4a7c15ULL + (current_graph_fingerprint << 6) + (current_graph_fingerprint >> 2);
-  for (const auto & kv : all_topics) {
-    const auto & topic = kv.first;
-    const auto & types = kv.second;
-    current_graph_fingerprint ^= hasher(topic) + 0x9e3779b97f4a7c15ULL + (current_graph_fingerprint << 6) + (current_graph_fingerprint >> 2);
-    if (!types.empty()) {
-      current_graph_fingerprint ^= hasher(types[0]) + 0x9e3779b97f4a7c15ULL + (current_graph_fingerprint << 6) + (current_graph_fingerprint >> 2);
-    }
-  }
+  // Build a canonical string representation of the topic graph and hash it.
+  // This avoids custom hash-combine logic and magic constants while still
+  // providing a stable fingerprint for cache invalidation.
+  std::ostringstream graph_repr;
+  graph_repr << all_topics.size() << ';';
+  for (const auto & kv : all_topics) {
+    const auto & topic = kv.first;
+    const auto & types = kv.second;
+    graph_repr << topic << '|';
+    if (!types.empty()) {
+      graph_repr << types[0];
+    }
+    graph_repr << ';';
+  }
+  size_t current_graph_fingerprint = std::hash<std::string>{}(graph_repr.str());
```
```cpp
  // Compute a lightweight fingerprint of the ROS graph (topics+types)
  // Note: ROS 2 does not expose a simple graph change counter; using
  // get_topic_names_and_types() and hashing the result is a reliable
  // way to detect graph changes that affect topic/component mapping.
  auto all_topics = node_->get_topic_names_and_types();

  size_t current_graph_fingerprint = 1469598103934665603ULL;  // FNV-1a seed-ish
  std::hash<std::string> hasher;
  // Incorporate topic count
  current_graph_fingerprint ^= all_topics.size() + 0x9e3779b97f4a7c15ULL + (current_graph_fingerprint << 6) + (current_graph_fingerprint >> 2);
  for (const auto & kv : all_topics) {
    const auto & topic = kv.first;
    const auto & types = kv.second;
    current_graph_fingerprint ^= hasher(topic) + 0x9e3779b97f4a7c15ULL + (current_graph_fingerprint << 6) + (current_graph_fingerprint >> 2);
    if (!types.empty()) {
      current_graph_fingerprint ^= hasher(types[0]) + 0x9e3779b97f4a7c15ULL + (current_graph_fingerprint << 6) + (current_graph_fingerprint >> 2);
    }
  }

  if (topic_map_cache_.empty() || cached_graph_change_count_ != current_graph_fingerprint) {
    topic_map_cache_ = build_component_topic_map();
    cached_graph_change_count_ = current_graph_fingerprint;
```
The cache invalidation check calls node_->get_topic_names_and_types() on every request to compute the fingerprint. This is a relatively expensive ROS 2 graph query that partially defeats the purpose of caching. While cheaper than building the full map, this still queries the ROS graph on every call.
Consider alternative approaches:
- Use a periodic refresh timer instead of on-demand checking (a timer-based sketch follows the suggested change below)
- Subscribe to graph events (though ROS 2 doesn't expose a simple callback mechanism)
- Accept that the cache may be slightly stale and refresh it periodically or on explicit invalidation calls
- At minimum, add performance benchmarking to verify this is actually faster than the uncached approach in realistic scenarios
Suggested change:

```diff
-  // Compute a lightweight fingerprint of the ROS graph (topics+types)
-  // Note: ROS 2 does not expose a simple graph change counter; using
-  // get_topic_names_and_types() and hashing the result is a reliable
-  // way to detect graph changes that affect topic/component mapping.
-  auto all_topics = node_->get_topic_names_and_types();
-
-  size_t current_graph_fingerprint = 1469598103934665603ULL;  // FNV-1a seed-ish
-  std::hash<std::string> hasher;
-  // Incorporate topic count
-  current_graph_fingerprint ^= all_topics.size() + 0x9e3779b97f4a7c15ULL + (current_graph_fingerprint << 6) + (current_graph_fingerprint >> 2);
-  for (const auto & kv : all_topics) {
-    const auto & topic = kv.first;
-    const auto & types = kv.second;
-    current_graph_fingerprint ^= hasher(topic) + 0x9e3779b97f4a7c15ULL + (current_graph_fingerprint << 6) + (current_graph_fingerprint >> 2);
-    if (!types.empty()) {
-      current_graph_fingerprint ^= hasher(types[0]) + 0x9e3779b97f4a7c15ULL + (current_graph_fingerprint << 6) + (current_graph_fingerprint >> 2);
-    }
-  }
-
-  if (topic_map_cache_.empty() || cached_graph_change_count_ != current_graph_fingerprint) {
-    topic_map_cache_ = build_component_topic_map();
-    cached_graph_change_count_ = current_graph_fingerprint;
+  // Throttle expensive ROS graph queries: only recompute the fingerprint
+  // if a minimum amount of time has elapsed since the last query.
+  using Clock = std::chrono::steady_clock;
+  static Clock::time_point last_graph_query_time{};
+  static size_t last_graph_fingerprint = 0U;
+  constexpr auto kMin_graph_refresh_interval = std::chrono::milliseconds(500);
+  const auto now = Clock::now();
+  // If we have a recent fingerprint and the cached graph change count matches,
+  // return the cached map without re-querying the ROS graph. This accepts a
+  // slightly stale cache in exchange for lower query overhead.
+  if (!topic_map_cache_.empty() &&
+    cached_graph_change_count_ == last_graph_fingerprint &&
+    last_graph_query_time != Clock::time_point{} &&
+    (now - last_graph_query_time) < kMin_graph_refresh_interval) {
+    // Return a copy to avoid callers holding a reference to internal cache
+    return topic_map_cache_;
+  }
+  // Compute a lightweight fingerprint of the ROS graph (topics+types)
+  // Note: ROS 2 does not expose a simple graph change counter; using
+  // get_topic_names_and_types() and hashing the result is a reliable
+  // way to detect graph changes that affect topic/component mapping.
+  auto all_topics = node_->get_topic_names_and_types();
+  last_graph_query_time = now;
+  size_t current_graph_fingerprint = 1469598103934665603ULL;  // FNV-1a seed-ish
+  std::hash<std::string> hasher;
+  // Incorporate topic count
+  current_graph_fingerprint ^=
+    all_topics.size() + 0x9e3779b97f4a7c15ULL + (current_graph_fingerprint << 6) + (current_graph_fingerprint >> 2);
+  for (const auto & kv : all_topics) {
+    const auto & topic = kv.first;
+    const auto & types = kv.second;
+    current_graph_fingerprint ^=
+      hasher(topic) + 0x9e3779b97f4a7c15ULL + (current_graph_fingerprint << 6) + (current_graph_fingerprint >> 2);
+    if (!types.empty()) {
+      current_graph_fingerprint ^=
+        hasher(types[0]) + 0x9e3779b97f4a7c15ULL + (current_graph_fingerprint << 6) + (current_graph_fingerprint >> 2);
+    }
+  }
+  if (topic_map_cache_.empty() || cached_graph_change_count_ != current_graph_fingerprint) {
+    topic_map_cache_ = build_component_topic_map();
+    cached_graph_change_count_ = current_graph_fingerprint;
+    last_graph_fingerprint = current_graph_fingerprint;
```
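For comparison, the periodic-refresh alternative listed in the comment above could look roughly like this; the 5-second period and the `refresh_timer_` member are assumptions, not part of this PR:

```cpp
// Sketch only: rebuild the map in the background on a wall timer so request
// paths never pay for graph queries; accepts up to one period of staleness.
// Assumed member: rclcpp::TimerBase::SharedPtr refresh_timer_;
refresh_timer_ = node_->create_wall_timer(
  std::chrono::seconds(5),
  [this]() {
    auto fresh = build_component_topic_map();
    std::lock_guard<std::mutex> lock(topic_map_mutex_);
    topic_map_cache_ = std::move(fresh);
  });
```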
```cpp
// Copyright 2026
//
```
The copyright header is incomplete. According to the Apache 2.0 license requirements and the pattern used in other test files (see test_native_topic_sampler.cpp), the header should include the full license text with copyright holder name, not just "Copyright 2026" without attribution.
Suggested change:

```diff
-// Copyright 2026
-//
+// Copyright 2026 ros2_medkit authors
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+//
```
```cpp
  // --- Component topic map cache ---
  std::map<std::string, ComponentTopics> topic_map_cache_;
  size_t cached_graph_change_count_{0};
  mutable std::mutex topic_map_mutex_;
```
The header file declares `std::mutex topic_map_mutex_` on line 298 but does not include the `<mutex>` header. While this may compile due to transitive includes (possibly through rclcpp), best practice is to explicitly include headers for all types used directly in the interface. Add `#include <mutex>` to the includes section.
```cpp
// Copyright 2026
//
// Tests for NativeTopicSampler component-topic map cache behavior

#include <gtest/gtest.h>

#include <chrono>
#include <memory>
#include <rclcpp/rclcpp.hpp>
#include <std_msgs/msg/string.hpp>
#include <string>
#include <thread>
#include <atomic>

#include "ros2_medkit_gateway/native_topic_sampler.hpp"

using ros2_medkit_gateway::NativeTopicSampler;

class NativeTopicSamplerCacheTest : public ::testing::Test {
protected:
  static void SetUpTestSuite() {
    rclcpp::init(0, nullptr);
  }

  static void TearDownTestSuite() {
    rclcpp::shutdown();
  }

  void SetUp() override {
    node_ = std::make_shared<rclcpp::Node>("test_sampler_node_cache");
    sampler_ = std::make_unique<NativeTopicSampler>(node_.get());

    // Publisher node in namespace /cache_test_ns
    rclcpp::NodeOptions options;
    options.arguments({"--ros-args", "-r", "__ns:=/cache_test_ns"});
    publisher_node_ = std::make_shared<rclcpp::Node>("cache_publisher", options);

    pub1_ = publisher_node_->create_publisher<std_msgs::msg::String>("/cache_test_ns/topic1", 10);

    // Allow the ROS graph to update
    std::this_thread::sleep_for(std::chrono::milliseconds(150));
    rclcpp::spin_some(node_);
  }

  void TearDown() override {
    pub1_.reset();
    sampler_.reset();
    publisher_node_.reset();
    node_.reset();
  }

  std::shared_ptr<rclcpp::Node> node_;
  std::shared_ptr<rclcpp::Node> publisher_node_;
  std::unique_ptr<NativeTopicSampler> sampler_;
  rclcpp::Publisher<std_msgs::msg::String>::SharedPtr pub1_;
};

TEST_F(NativeTopicSamplerCacheTest, CachePopulatedOnFirstCall) {
  // Component FQN is namespace + node name
  const std::string comp = "/cache_test_ns/cache_publisher";

  auto topics = sampler_->get_component_topics(comp);

  // Should include our published topic
  bool found = false;
  for (const auto & t : topics.publishes) {
    if (t == "/cache_test_ns/topic1") {
      found = true;
      break;
    }
  }
  EXPECT_TRUE(found) << "Cache build should discover published topic on first call";
}

TEST_F(NativeTopicSamplerCacheTest, CachedDataReturnedOnSubsequentCalls) {
  const std::string comp = "/cache_test_ns/cache_publisher";

  auto first = sampler_->get_component_topics(comp);
  std::this_thread::sleep_for(std::chrono::milliseconds(50));
  rclcpp::spin_some(node_);
  auto second = sampler_->get_component_topics(comp);

  // Results should be equivalent (no graph change)
  EXPECT_EQ(first.publishes.size(), second.publishes.size());
  EXPECT_EQ(first.subscribes.size(), second.subscribes.size());
}

TEST_F(NativeTopicSamplerCacheTest, CacheInvalidatedWhenGraphChanges) {
  const std::string comp = "/cache_test_ns/cache_publisher";

  // Ensure initial topic present
  auto before = sampler_->get_component_topics(comp);
  bool found_before = std::any_of(before.publishes.begin(), before.publishes.end(), [](const std::string & t) {
    return t == "/cache_test_ns/topic1";
  });
  EXPECT_TRUE(found_before);

  // Remove publisher (simulate graph change)
  pub1_.reset();
  std::this_thread::sleep_for(std::chrono::milliseconds(200));
  rclcpp::spin_some(node_);

  auto after = sampler_->get_component_topics(comp);

  bool found_after = std::any_of(after.publishes.begin(), after.publishes.end(), [](const std::string & t) {
    return t == "/cache_test_ns/topic1";
  });

  EXPECT_FALSE(found_after) << "Cache should be invalidated and rebuilt after publisher removed";
}

TEST_F(NativeTopicSamplerCacheTest, ThreadSafetyUnderConcurrentAccess) {
  const std::string comp = "/cache_test_ns/cache_publisher";

  constexpr int NUM_THREADS = 8;
  constexpr int ITER = 100;

  std::atomic<int> success_count{0};

  std::vector<std::thread> threads;
  for (int i = 0; i < NUM_THREADS; ++i) {
    threads.emplace_back([&]() {
      for (int j = 0; j < ITER; ++j) {
        auto topics = sampler_->get_component_topics(comp);
        // If the cache is functioning, either we find the topic or not depending on graph state
        // Increment success if call returned without crashing and produced a valid structure
        if (topics.publishes.size() >= 0) {
          ++success_count;
        }
        // Small yield
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
      }
    });
  }

  for (auto & t : threads) {
    t.join();
  }

  EXPECT_EQ(success_count.load(), NUM_THREADS * ITER);
}

int main(int argc, char ** argv) {
  testing::InitGoogleTest(&argc, argv);
  return RUN_ALL_TESTS();
}
```
The new test file test_native_topic_sampler_cache.cpp is not registered in CMakeLists.txt, which means it will not be compiled or executed during testing. Following the pattern used for other test files (e.g., test_native_topic_sampler on lines 256-258), this test needs to be added to the build system with ament_add_gtest and appropriate link dependencies.
Hi @ganeshkarthik016, thanks for picking up this optimization! Caching the component-topic map is the right direction. I see you've already addressed some of @mfaferek93's initial feedback. Nice iteration.
However, there are still several issues that need to be resolved before this can be merged.
```diff
    * @return true if this is a system topic that should be filtered
    */
-  static bool is_system_topic(const std::string & topic_name);
+    static bool is_system_topic(const std::string & topic_name);
```
Indentation is broken: `is_system_topic` uses a 4-space indent while the project convention is 2-space (Google clang-format). Please run clang-format.
```cpp
   * @brief Get the message type for a topic from the graph
   */
  std::string get_topic_type(const std::string & topic_name);
    private:
```
The `private:` label and `get_topic_type` have the wrong indentation (4-space). Also, the Doxygen `@brief` comment for `get_topic_type()` was removed; please restore it.
```cpp
  rclcpp::Node * node_;

  /// Native JSON serializer for topic deserialization
  std::shared_ptr<ros2_medkit_serialization::JsonSerializer> serializer_;
```
The `/// Native JSON serializer for topic deserialization` doc comment on `serializer_` was removed. Please keep existing documentation.
```cpp
  std::shared_ptr<ros2_medkit_serialization::JsonSerializer> serializer_;

  // Getter for cached component-topic map
  // Returns a copy to avoid exposing internal data while the mutex is released.
```
The new `get_component_topic_map()` comment says "Returns a copy", which is good, but the method should have a proper Doxygen `@brief` block per project conventions (see other methods in this header).
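Something along these lines would match the header's conventions (wording illustrative):

```cpp
/**
 * @brief Get the component-topic map, rebuilding it if the ROS graph changed.
 *
 * @return A copy of the cached map; returning a copy avoids exposing
 *         internal state after the mutex is released.
 */
std::map<std::string, ComponentTopics> get_component_topic_map();
```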
```cpp
  RCLCPP_DEBUG(node_->get_logger(), "Built topic map for %zu components", component_map.size());
  return component_map;
}
std::map<std::string, ComponentTopics>
```
Missing blank line between build_component_topic_map() closing brace (line 487) and get_component_topic_map(). Separate function definitions with a blank line per project style.
```cpp
std::map<std::string, ComponentTopics>
NativeTopicSampler::get_component_topic_map()
{
  std::lock_guard<std::mutex> lock(topic_map_mutex_);
```
The `lock_guard` holds `topic_map_mutex_` for the entire duration, including `build_component_topic_map()` (L514), which makes multiple expensive ROS 2 graph calls (`get_publishers_info_by_topic` + `get_subscriptions_info_by_topic` per topic). This blocks all concurrent callers. A better pattern: check the fingerprint under the lock → release the lock → build outside the lock → re-acquire the lock → double-check → swap.
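A sketch of that pattern, using this PR's members plus a hypothetical `compute_graph_fingerprint()` helper that wraps the fingerprint loop:

```cpp
std::map<std::string, ComponentTopics> NativeTopicSampler::get_component_topic_map()
{
  const size_t current_fingerprint = compute_graph_fingerprint();  // hypothetical helper

  {
    std::lock_guard<std::mutex> lock(topic_map_mutex_);
    if (!topic_map_cache_.empty() && cached_graph_change_count_ == current_fingerprint) {
      return topic_map_cache_;  // fast path: copy out under the lock
    }
  }  // lock released before the expensive rebuild

  // Many graph queries happen here; concurrent readers are not blocked.
  auto fresh_map = build_component_topic_map();

  std::lock_guard<std::mutex> lock(topic_map_mutex_);
  // Double-check: another thread may have installed a map for this fingerprint
  // while we were building; only swap if ours is newer.
  if (cached_graph_change_count_ != current_fingerprint) {
    topic_map_cache_ = std::move(fresh_map);
    cached_graph_change_count_ = current_fingerprint;
  }
  return topic_map_cache_;
}
```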
```cpp
  // Note: ROS 2 does not expose a simple graph change counter; using
  // get_topic_names_and_types() and hashing the result is a reliable
  // way to detect graph changes that affect topic/component mapping.
  auto all_topics = node_->get_topic_names_and_types();
```
`get_component_topic_map()` calls `get_topic_names_and_types()` here (L498) for fingerprinting, then `build_component_topic_map()` calls it again internally (L450). This doubles the graph query cost on every cache miss. Refactor `build_component_topic_map()` to accept the already-fetched topic list, or share the query result.
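One way to share the result is a delegating overload; the signature below is a sketch, not from this PR (it matches what `get_topic_names_and_types()` returns):

```cpp
// Existing public entry point keeps its behavior.
std::map<std::string, ComponentTopics> NativeTopicSampler::build_component_topic_map()
{
  return build_component_topic_map(node_->get_topic_names_and_types());
}

// New overload iterates a topic list the caller already fetched, so the
// fingerprint check and the rebuild share a single graph query.
std::map<std::string, ComponentTopics> NativeTopicSampler::build_component_topic_map(
  const std::map<std::string, std::vector<std::string>> & all_topics);
```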
```cpp
  // way to detect graph changes that affect topic/component mapping.
  auto all_topics = node_->get_topic_names_and_types();

  size_t current_graph_fingerprint = 1469598103934665603ULL;  // FNV-1a seed-ish
```
The magic numbers `1469598103934665603ULL` and `0x9e3779b97f4a7c15ULL` (L503, 507, 509) should be named constants (`kFnvOffsetBasis`, `kHashCombineMagic`) or replaced by plain `std::hash`.
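One detail worth noting while naming them: the true 64-bit FNV-1a offset basis is 14695981039346656037 (0xcbf29ce484222325); the literal in the PR drops the trailing 7. Any fixed seed works, but the name should match the value:

```cpp
// 64-bit FNV-1a offset basis (0xcbf29ce484222325). The PR's literal
// 1469598103934665603ULL appears to be this value with the last digit lost.
constexpr size_t kFnvOffsetBasis = 14695981039346656037ULL;
// floor(2^64 / golden ratio), the conventional 64-bit hash-combine constant.
constexpr size_t kHashCombineMagic = 0x9e3779b97f4a7c15ULL;
```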
```cpp
  size_t current_graph_fingerprint = 1469598103934665603ULL;  // FNV-1a seed-ish
  std::hash<std::string> hasher;
  // Incorporate topic count
  current_graph_fingerprint ^= all_topics.size() + 0x9e3779b97f4a7c15ULL + (current_graph_fingerprint << 6) + (current_graph_fingerprint >> 2);
```
The fingerprint only hashes topic names and types, but `build_component_topic_map()` also queries per-topic publisher/subscriber info. The cache won't invalidate when a new node publishes to an existing topic, a subscriber leaves, or node names change. The fingerprint must also hash `count_publishers()` + `count_subscribers()` per topic.
```cpp
    }
  }

  if (topic_map_cache_.empty() || cached_graph_change_count_ != current_graph_fingerprint) {
```
The `topic_map_cache_.empty()` condition can't distinguish "never built" from "graph genuinely has zero topics": a legitimately empty map looks identical to a missing cache, so it gets rebuilt on every call. Consider using std::optional or a bool flag cache_valid_ instead.
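A minimal sketch of the `std::optional` variant (member and variable names taken from this PR):

```cpp
#include <optional>

// Member: an unset optional distinguishes "never built" from "graph has
// zero topics", which an empty map cannot.
std::optional<std::map<std::string, ComponentTopics>> topic_map_cache_;

// Inside get_component_topic_map():
if (!topic_map_cache_.has_value() ||
  cached_graph_change_count_ != current_graph_fingerprint)
{
  topic_map_cache_ = build_component_topic_map();
  cached_graph_change_count_ = current_graph_fingerprint;
}
return *topic_map_cache_;
```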
Pull Request
Summary
Adds caching to build_component_topic_map() in NativeTopicSampler to avoid rebuilding the full component-topic map on every request.
The cache is invalidated when the ROS 2 graph changes, reducing unnecessary iteration over the full entity cache and improving performance when the graph is stable.
Issue
Link the related issue (required):
Type
Testing
Build verified locally.
Manually reviewed the logic to confirm no functional behavior changes are expected.
Checklist