WearablesNutrition

Methodology

How we test wearables and nutrition apps

Retail hardware, paid app tiers, no review units. Here's how we arrive at the numbers on the compatibility matrix and the scores in the ranking reviews.

Hardware

All reviews and matrix entries are verified on retail hardware that we own. No loaners, no review units, no comped devices. Current test inventory:

Companion phones: iPhone 16 Pro (iOS 18.4) and Pixel 9 (Android 15).

Apps

We test the current App Store / Play Store / Galaxy Store / visionOS builds at the time of publication. We subscribe to paid tiers (MyFitnessPal Premium, Cronometer Gold, etc.) wherever the paid tier changes integration behaviour, and we note that in the review.

Compatibility matrix

For each cell of the matrix:

  1. We set up the integration fresh — uninstall the nutrition app, re-install, sign in, enable all relevant HealthKit / Health Connect / vendor-API permissions.
  2. We verify that activity flows wearable → app within 24 hours.
  3. We verify the reverse (calorie target → wearable's companion app) where the app claims two-way sync.
  4. We log the path (native, framework-mediated, vendor API) and the direction of the working flow.
  5. We record a "last verified" date and re-test every 2-3 months or on major OS / vendor release.

Review scores

Scores on ranking pages (Apple Watch, Garmin, Vision Pro) are subjective and weighted across several criteria.

We don't claim these scores are objectively reproducible. We claim that a week with each app on each device, run in rotation, produces a defensible ordering. The weighting favours accuracy because accuracy is the whole point of tracking.
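Mechanically, the combination looks something like the sketch below. The criteria names and the specific weights are hypothetical (the source only states that accuracy carries the largest weight); each subscore is on a 0-10 scale.

```python
# Hypothetical weights -- the only constraint from the methodology is
# that accuracy dominates. They must sum to 1.0.
WEIGHTS = {
    "accuracy": 0.40,
    "sync_reliability": 0.25,
    "logging_speed": 0.20,
    "value": 0.15,
}

def weighted_score(subscores: dict[str, float]) -> float:
    """Combine per-criterion subscores (0-10) into one weighted score."""
    return round(sum(WEIGHTS[k] * subscores[k] for k in WEIGHTS), 1)

# A strong-accuracy app can outrank a cheaper, faster-logging rival.
weighted_score({"accuracy": 9, "sync_reliability": 7,
                "logging_speed": 8, "value": 6})
```

The point of the weighting is ordinal, not absolute: two reviewers might assign different subscores, but with accuracy weighted heaviest the resulting ordering tends to be stable.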

What we don't do

Corrections

We correct factual errors inline with a dated editorial note. If we've changed a score materially, we note it in the article's "What changed" section. Send corrections to editors@wearablesnutrition.com.