# Ingenimax StarOps Documentation

> Official StarOps docs for deploying open-source LLMs at scale on AWS. Covers setup, deployment, hosting architecture, integrations, multi-user management, and support.

# Getting Started

https://docs.ingenimax.ai/docs/categories/getting-started/about-starops About StarOps – overview, purpose, and features
https://docs.ingenimax.ai/docs/categories/getting-started/prerequisites Prerequisites – AWS, Kubernetes, GPU, IAM, CLI setup

# Feedback & Support

https://docs.ingenimax.ai/docs/categories/feedback-support/getting-support Getting Support – contacting the support team
https://docs.ingenimax.ai/docs/categories/feedback-support/providing-feedback Providing Feedback – feature requests & roadmaps

# Users & Organizations

https://docs.ingenimax.ai/docs/categories/users-organizations/user-settings User Settings – profile & personal preferences
https://docs.ingenimax.ai/docs/categories/users-organizations/managing-users Managing Users – organization user management
https://docs.ingenimax.ai/docs/categories/users-organizations/managing-users/addusers Add Users – inviting & removing users
https://docs.ingenimax.ai/docs/categories/users-organizations/organizations Organizations – org-level settings & billing context

# Integrations

https://docs.ingenimax.ai/docs/categories/integrations/aws Integrations – AWS (account linking & permissions)
https://docs.ingenimax.ai/docs/categories/integrations/github Integrations – GitHub (repo linkage)

# Using StarOps

https://docs.ingenimax.ai/docs/categories/using-starops/workflows Workflows Overview – using StarOps end-to-end
https://docs.ingenimax.ai/docs/categories/using-starops/workflows/model-deployment Model Deployment Workflows – deploying CI/CD-style
https://docs.ingenimax.ai/docs/categories/using-starops/workflows/model-deployment/starops-models How StarOps Hosts Models – architecture (KServe, vLLM, GPU, Istio)
https://docs.ingenimax.ai/docs/categories/using-starops/workflows/model-deployment/supported-models-gpu-reqs Supported Models & GPU Requirements
https://docs.ingenimax.ai/docs/categories/using-starops/workflows/model-deployment/how-it-works How It Works – internal architecture and autoscaling
https://docs.ingenimax.ai/docs/categories/using-starops/workflows/model-deployment/models-page Models Page – UI browsing and selection
https://docs.ingenimax.ai/docs/categories/using-starops/workflows/model-deployment/deploying-a-model Deploying a Model – step-by-step guide
https://docs.ingenimax.ai/docs/categories/using-starops/workflows/model-deployment/opening-a-pr Opening a PR – contributing model definitions
https://docs.ingenimax.ai/docs/categories/using-starops/workflows/model-deployment/removing-inference Removing Inference – cleaning up deployed models

# Conversations

https://docs.ingenimax.ai/docs/categories/using-starops/conversations Conversations – chat setup, streaming, histories

# Optional Raw MD Sources

Note: append `.md` to a page URL to fetch the raw plaintext markdown version of that page.
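The `.md` convention above can be sketched in a few lines. This is a minimal example, assuming every docs page follows the append-`.md` pattern; the `raw_md_url` helper is hypothetical, not part of StarOps.

```python
# Build the raw-markdown URL for a StarOps docs page by appending ".md"
# (assumption: the pattern described in the note holds for every page).
def raw_md_url(page_url: str) -> str:
    # Drop any trailing slash so the suffix attaches to the page slug.
    return page_url.rstrip("/") + ".md"


url = raw_md_url(
    "https://docs.ingenimax.ai/docs/categories/getting-started/about-starops"
)
print(url)
# → https://docs.ingenimax.ai/docs/categories/getting-started/about-starops.md
```

The resulting URL can then be fetched with any HTTP client (e.g. `curl` or `urllib.request`) to retrieve the plaintext markdown source.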