Node AI Rental Service Overview
Node AI provides clients with bespoke access to computational resources such as GPUs, CPUs, RAM, and disk storage, delivered primarily as dedicated Docker container instances. This guide explains our approach to resource allocation and management.
Resource Allocation
- GPU: Each instance has exclusive access to its designated GPUs, preventing performance dips from shared usage among clients.
- CPU: CPU resources are allocated in proportion to the number of GPUs per instance, with burst capacity available during periods of lower system load.
- RAM: RAM is distributed on the same proportional basis as CPU, with temporary excess usage permitted subject to system availability and constraints.
- Disk: Disk storage is fixed at instance setup, so accurate initial resource estimation is important.
- Miscellaneous: Ancillary resources, such as shared memory, are assigned in tandem with GPU allocations to maintain a balanced distribution.
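The allocation scheme above maps naturally onto standard Docker resource flags. The sketch below uses plain `docker run` options (`--gpus`, `--cpus`, `--memory`, `--shm-size`) to illustrate the idea; the specific flags, values, and image name are illustrative assumptions, not Node AI's actual provisioning commands.

```shell
# Illustrative only: a dedicated instance pinned to two designated GPUs,
# with CPU, RAM, and shared memory scaled to the GPU count.
docker run -d \
  --gpus '"device=0,1"' \
  --cpus 16 \
  --memory 64g \
  --shm-size 16g \
  my-workload:latest
```

In this sketch, doubling the GPU count of an instance would also double the `--cpus`, `--memory`, and `--shm-size` values, reflecting the proportional allocation described above.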
Duration and Lifecycle
- Rental agreements define the operational lifespan of each instance, and instances are automatically terminated when the contract expires. Extensions may be possible, but they depend on market dynamics and are not guaranteed.
Operating Environment
- Node AI runs Linux Docker instances and supports a broad range of Docker images, including images from private repositories when the correct credentials are supplied.
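Using a private image follows the standard Docker workflow of authenticating to the registry before pulling. The registry host, image name, and environment variables below are placeholders for illustration:

```shell
# Illustrative: authenticate to a private registry, then pull the image.
# REGISTRY_USER and REGISTRY_TOKEN are placeholder credentials.
echo "$REGISTRY_TOKEN" | docker login registry.example.com \
  -u "$REGISTRY_USER" --password-stdin
docker pull registry.example.com/team/private-image:latest
```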
Launch Modes
- Catering to varying client requirements, Node AI offers multiple launch modes:
  - Entrypoint/Args: For straightforward container operations.
  - SSH: Provides secure remote access.
  - Jupyter: Facilitates interactive computing sessions.
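The three launch modes roughly correspond to the following `docker run` patterns; the image names, command, and port mappings are illustrative assumptions rather than Node AI's actual launch configuration:

```shell
# Entrypoint/Args: run a one-shot command directly in the container.
docker run --rm my-image:latest python train.py --epochs 10

# SSH: run an image with an SSH daemon and publish its port for remote access.
docker run -d -p 2222:22 my-ssh-image:latest

# Jupyter: publish the notebook server port for interactive sessions.
docker run -d -p 8888:8888 my-jupyter-image:latest \
  jupyter lab --ip 0.0.0.0 --no-browser
```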
Node AI’s platform is engineered to provide a streamlined, adaptable computing service, meeting the needs of varied computational projects from intensive AI/ML tasks to budget-sensitive initiatives.