ATD Case Studies

ATD in Diagnostic Healthcare

ATD was evaluated on highly heterogeneous and imbalanced medical datasets spanning a wide range of imaging modalities and classification tasks, including pathology, CT, MRI, X-ray, ultrasound, and endoscopy. Across 30 datasets and 80 independent labels distributed over multiple nodes, ATD achieved an overall accuracy of 92.7%, surpassing centralized learning (84.9%) and swarm learning (72.99%); federated learning failed outright under these conditions because of its heavy computational demands. ATD also scaled well: accuracy on existing nodes dropped by only 1% after the network was expanded, compared with a 20% drop for centralized learning, demonstrating resilience to catastrophic forgetting. In addition, ATD reduced computational costs by up to 50% relative to both centralized and swarm learning, confirming its efficiency and scalability.

ATD AI Structure

a) Sample of the cloud-based centralised learning paradigm: each client's data and learned weights are stored on the central server.

b) Sample of the federated learning paradigm: only learned weights are shared with the central server, while raw data and computational resources remain with the clients.

c) Sample of the swarm learning process: both data and weights are kept locally by the clients.

d) Sample of the ATD learning process: each client trains a local model and shares learned weights directly with its peers while keeping the raw data local. Unlike the other paradigms, ATD operates efficiently at both the intra-node and inter-node levels, and it supports continuous, incremental learning as new data or tasks emerge, while preserving privacy and minimising costly retraining.
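The peer-to-peer exchange in panel (d) can be sketched in a few lines. This is an illustrative toy, not the ATD implementation: the `Node` class, its two-parameter "model", and the simple weight averaging are all assumptions standing in for local training and the actual merge rule; the one property it does demonstrate is that only weights cross the network and raw data never leaves a node.

```python
# Toy sketch of peer-to-peer weight sharing as in panel (d).
# Names (Node, train_local, merge_from_peer) are illustrative, not the ATD API.

class Node:
    """A client that trains locally and exchanges only weights, never raw data."""

    def __init__(self, name, data):
        self.name = name
        self.data = data              # raw data: stays local, never transmitted
        self.weights = [0.0, 0.0]     # toy model: two parameters

    def train_local(self):
        # Stand-in for a local training step: derive weights from private data.
        self.weights = [sum(self.data) / len(self.data), float(len(self.data))]

    def merge_from_peer(self, peer_weights):
        # Merge a peer's learned weights with our own (plain averaging here);
        # only the weight vector crosses the network.
        self.weights = [(a + b) / 2 for a, b in zip(self.weights, peer_weights)]

# Two peers train locally, then exchange weights directly -- no central server.
a = Node("node_a", [1.0, 2.0, 3.0])
b = Node("node_b", [5.0, 7.0])
a.train_local()
b.train_local()
shared = list(a.weights)          # only weights leave node a
a.merge_from_peer(b.weights)
b.merge_from_peer(shared)
assert a.weights == b.weights     # peers converge on the same merged model
```

New nodes can join by running the same exchange with any existing peer, which is the incremental, server-free behaviour the comparison below attributes to ATD.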

Centralized – requires pooling sensitive data on one server (privacy risk) and is costly.
Federated – needs a central server and incurs high communication costs.
Swarm – sequential and blockchain-coordinated, hence slow.
ATD – peer-to-peer: faster, cheaper, scalable, adaptable, and flexible.