
Other components

TaskServer

TaskServer is ABS's module for handling asynchronous, long-running tasks, such as moving or copying data between storage pools. As with Meta, multiple TaskServers form a task cluster: ZooKeeper elects a single task leader, which schedules and distributes tasks, while the remaining TaskServers act as task runners that execute them.
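The election described above can be illustrated with a small in-memory simulation. This is not ABS's actual implementation; it mimics the well-known ZooKeeper sequential-ephemeral-node election recipe, and all names (`ElectionSim`, `taskserver-1`, etc.) are assumptions for illustration.

```python
# In-memory sketch of ZooKeeper-style leader election among TaskServers.
# Each server creates a sequential "ephemeral node"; the server holding
# the lowest sequence number becomes the task leader, the rest are runners.

class ElectionSim:
    def __init__(self):
        self._seq = 0
        self.nodes = {}          # server name -> sequence number

    def register(self, server):
        """Simulates creating an ephemeral sequential znode."""
        self.nodes[server] = self._seq
        self._seq += 1

    def deregister(self, server):
        """Simulates the ephemeral znode vanishing when a server dies."""
        self.nodes.pop(server, None)

    def leader(self):
        """The server with the lowest sequence number leads."""
        return min(self.nodes, key=self.nodes.get) if self.nodes else None


election = ElectionSim()
for name in ("taskserver-1", "taskserver-2", "taskserver-3"):
    election.register(name)

print(election.leader())         # taskserver-1 leads first
election.deregister("taskserver-1")
print(election.leader())         # taskserver-2 takes over
```

Because the election is driven by ephemeral state, a crashed leader's node disappears automatically and the next server in sequence takes over without manual intervention.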

When a task leader fails, ABS elects a new task leader so that new long-task requests are processed promptly, without affecting tasks already in progress. When a task runner encounters an error, the leader detects the failure and reschedules the task to an available runner, which retrieves the task's current progress and continues from there. In addition, task runners use ZooKeeper to guarantee that at any given time, at most one runner is processing each task.
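The failover behavior above can be sketched as follows. This is an illustrative simulation under assumed names (`Task`, `run_steps`, runner IDs), not ABS code: progress is checkpointed, a per-task lock stands in for the ZooKeeper lock, and a rescheduled runner resumes from the checkpoint rather than restarting.

```python
# Sketch of leader-driven task rescheduling with checkpointed progress
# and a per-task mutual-exclusion lock (ZooKeeper plays this role in ABS).

class Task:
    def __init__(self, task_id, total_steps):
        self.task_id = task_id
        self.total_steps = total_steps
        self.progress = 0        # persisted checkpoint
        self.owner = None        # runner currently holding the task lock

    def acquire(self, runner):
        """Per-task lock: at most one runner may own the task."""
        if self.owner is None:
            self.owner = runner
            return True
        return False

    def release(self):
        self.owner = None


def run_steps(task, runner, steps):
    """Runner resumes from the last checkpoint and advances `steps`."""
    assert task.owner == runner
    task.progress = min(task.total_steps, task.progress + steps)


task = Task("copy-pool-a-to-b", total_steps=10)

# runner-1 takes the task, makes partial progress, then crashes.
task.acquire("runner-1")
run_steps(task, "runner-1", 4)
task.release()                   # leader detects the failure, frees the lock

# The leader reschedules to runner-2, which resumes at step 4, not step 0.
task.acquire("runner-2")
run_steps(task, "runner-2", 6)
print(task.progress)             # 10: completed without restarting from zero
```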

iSCSI redirector

The iSCSI redirector provides redirection services for iSCSI access. It communicates regularly with Meta to maintain an up-to-date list of active Access nodes. When a client's iSCSI initiator sends a login request, the redirector returns an appropriate Access node; the initiator then logs into that node, which handles all subsequent I/O operations.
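The login-guidance flow can be sketched as below. This is a hypothetical illustration, not the redirector's actual code: the node list would come from Meta, and the selection policy shown here (least active sessions) is an assumption.

```python
# Sketch of iSCSI login redirection: keep the active Access-node list
# fresh (fed by Meta in ABS) and answer each login with one node.

class Redirector:
    def __init__(self):
        self.active_nodes = {}   # node address -> current session count

    def update_from_meta(self, nodes):
        """Refresh the active node list reported by Meta,
        preserving session counts for nodes that stay active."""
        self.active_nodes = {n: self.active_nodes.get(n, 0) for n in nodes}

    def handle_login(self):
        """Return the Access node the initiator should log into."""
        if not self.active_nodes:
            raise RuntimeError("no active Access nodes")
        node = min(self.active_nodes, key=self.active_nodes.get)
        self.active_nodes[node] += 1
        return node


r = Redirector()
r.update_from_meta(["10.0.0.1:3260", "10.0.0.2:3260"])
first = r.handle_login()
second = r.handle_login()
print(first, second)             # consecutive logins spread across both nodes
```

Since the redirector only answers discovery and login requests, each interaction is a single small exchange; the data path runs directly between the initiator and the chosen Access node.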

The iSCSI redirector is deployed on every physical node, and all instances are logically equivalent. Combined with the cluster's access virtual IP, this lets clients reach iSCSI storage through a single, unified address. Because the redirector only provides iSCSI discovery and login guidance and does not handle any I/O requests, it will not become a performance bottleneck in the system.

Inspector

The Inspector service is ABS's scanning and status-monitoring service. It periodically checks the status of all data shards in the cluster and proactively detects and repairs transient shard inconsistencies that can arise from power outages or other failures, providing a stronger guarantee of data consistency.
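An Inspector-style consistency scan can be sketched as follows. All names here are assumptions for illustration (ABS does not document its internal scan algorithm); the sketch compares replica checksums per shard and reports, for each divergent shard, the majority checksum a repair could copy from.

```python
# Sketch of a periodic shard-consistency scan: replicas of a healthy
# shard share one checksum; a mismatch marks the shard for repair.

from collections import Counter

def scan_shards(shards):
    """shards: shard id -> list of replica checksums.
    Returns {shard id: majority checksum} for shards whose replicas disagree."""
    inconsistent = {}
    for shard_id, checksums in shards.items():
        if len(set(checksums)) > 1:
            # Pick the majority checksum as the repair source.
            majority, _ = Counter(checksums).most_common(1)[0]
            inconsistent[shard_id] = majority
    return inconsistent


cluster = {
    "shard-01": ["a1f3", "a1f3", "a1f3"],   # all replicas agree: healthy
    "shard-02": ["b7c2", "b7c2", "0000"],   # one replica diverged
}
print(scan_shards(cluster))      # {'shard-02': 'b7c2'}
```

Running such a scan on a schedule lets transient divergence (for example, a replica that missed a write during a power loss) be found and repaired before it is ever read.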