Infra-warrior/linuxech-Job-agent

🚀 Linuxech Sovereign AI Agent (Local Auto-Apply Pipeline)

A fully local, privacy-first, Human-in-the-Loop (HITL) autonomous AI agent that discovers, scores, and applies to jobs on your behalf.

This system runs completely on your local machine using Open-Source Large Language Models (via Ollama), browser-use for autonomous web navigation, and JobSpy for data ingestion. Your data (resumes, work history, PII) never leaves your machine.

⚠️ Disclaimers & Acknowledgments

AI Collaboration: This project was conceptualized and created in collaboration with AI. I am not the sole creator of the underlying code, but rather the architect and orchestrator of this specific pipeline.

Liability & Usage: This software is provided "as is", without warranty of any kind. The creators/authors are NOT liable for any damages, account suspensions, bans, or legal repercussions resulting from the use of this software.

Platform Terms of Service: Automated scraping and applying may violate the Terms of Service of platforms like LinkedIn, Indeed, Glassdoor, etc. You use this tool entirely at your own discretion and risk.

Review Applications: AI can hallucinate. This pipeline utilizes a "Human-in-the-Loop" design so you can review job matches before dispatching the autonomous agent.

🏗 Architecture & Tech Stack

The system is split into three core local components:

The Brain (Local LLM, via Ollama):

- Uses lightweight, CPU-optimized models.
- `llama3.2:3b` for decision making, reasoning, and scoring job descriptions against your resume.
- `qwen2.5:3b` for structured data extraction and JSON parsing.

The Hands (Python Backend):

- A FastAPI server.
- `browser-use` (Playwright) to physically navigate DOM elements and fill out forms.
- `JobSpy` to scrape job boards quietly, without launching heavy browsers.

The Dashboard (React Frontend):

- A sleek, terminal-inspired Linuxech interface built with Vite and Tailwind CSS v4.
- Manages the "Human-in-the-Loop" inbox, letting you Skip or Apply to AI-recommended jobs.
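The Brain's scoring step can be sketched as a single request to Ollama's local HTTP API (`/api/generate` on the default port 11434). The prompt wording and the `{"score", "reason"}` schema below are illustrative assumptions, not the project's actual code:

```python
import json

# Ollama's default local endpoint; no data leaves your machine.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_scoring_request(resume: str, job_description: str) -> dict:
    """Build a JSON-mode request asking llama3.2:3b to score a job match.

    The prompt and score schema are hypothetical; the payload fields
    (model, prompt, format, stream) follow Ollama's /api/generate API.
    """
    prompt = (
        "You are a job-matching assistant. Compare the resume and the job "
        "description, then answer ONLY with JSON like "
        '{"score": 0-100, "reason": "..."}.\n\n'
        f"RESUME:\n{resume}\n\nJOB DESCRIPTION:\n{job_description}"
    )
    return {
        "model": "llama3.2:3b",  # the decision/reasoning model from the stack above
        "prompt": prompt,
        "format": "json",        # ask Ollama to constrain output to valid JSON
        "stream": False,         # return one complete response instead of chunks
    }

payload = build_scoring_request("Linux admin, 5 yrs", "SRE role, Kubernetes")
print(payload["model"])  # → llama3.2:3b
```

Sending `json.dumps(payload)` as a POST body to `OLLAMA_URL` (e.g. with `urllib.request` or `requests`) returns the model's JSON verdict, which the backend can parse before anything reaches the dashboard inbox.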

💻 System Requirements

This system is optimized to run on standard consumer hardware, including laptops without dedicated GPUs (e.g., Intel i5, 16GB RAM).

OS: Linux (Ubuntu/Debian recommended) or macOS.

RAM: 16 GB minimum (roughly 11 GB for the LLM, 5 GB for the OS and browser).

CPU Inference Note: If running without a dedicated Nvidia GPU/Apple Silicon, Ollama will fall back to your CPU. The browser agent may take 15-30 seconds to "think" between clicks on web pages. This is normal behavior for local CPU inference.

Software: Node.js (v20+), Python (3.10+).

🚀 Installation & Setup

  1. Install System Prerequisites (Ubuntu/Debian)

Ensure you have Node.js v20+ and Python installed:

```bash
# Update Node.js to v20+ (required for Vite)
curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash -
sudo apt-get install -y nodejs

# Install Python and venv
sudo apt install -y python3 python3-pip python3-venv
```

  2. Set up the AI Engine (Ollama)

Install Ollama and pull the CPU-optimized models:

```bash
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Pull the required lightweight models
ollama pull llama3.2:3b
ollama pull qwen2.5:3b
```

  3. Set up the Python Backend

Open a terminal in the project root and set up the backend environment:

```bash
cd backend

# Set up a virtual environment
python3 -m venv venv
source venv/bin/activate

# Install dependencies
pip install -r requirements.txt

# Install Playwright browsers (required for browser-use)
playwright install
```

(Backend Dependencies: fastapi, uvicorn, python-jobspy, browser-use, langchain-ollama, pydantic, playwright)

  4. Set up the React Frontend

Open a new terminal tab in the project root:

```bash
cd frontend

# Install Node modules
npm install
npm install @tailwindcss/vite tailwindcss lucide-react
```

⚙️ Usage Instructions

To run the Sovereign Agent, you need to start three separate services. Open three terminal windows:

Terminal 1 (The LLM Service):

```bash
ollama serve
```

(Ensures the AI is listening on port 11434.)

Terminal 2 (The Python Backend):

```bash
cd backend
source venv/bin/activate
python3 agent_backend.py
```

(Runs the FastAPI server on http://localhost:8000.)

Terminal 3 (The React Dashboard):

```bash
cd frontend
npm run dev
```

(Runs the UI on http://localhost:5173.)

Open your web browser to http://localhost:5173. You can now view scraped jobs, read the AI's reasoning for why you are a good match, and click "Apply" to dispatch the headless browser!
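The Human-in-the-Loop gate can be sketched as a simple triage over the AI's match scores: only well-scored jobs land in your review inbox, and nothing is submitted until you click Apply. The threshold, field names, and sample data below are hypothetical, not the project's actual schema:

```python
# Hypothetical HITL triage: high-scoring jobs go to the dashboard inbox
# for human review; the rest are skipped automatically.
REVIEW_THRESHOLD = 70  # illustrative cutoff, not a project setting

def triage(jobs: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split scored jobs into a review inbox and a skipped pile."""
    inbox = [j for j in jobs if j["score"] >= REVIEW_THRESHOLD]
    skipped = [j for j in jobs if j["score"] < REVIEW_THRESHOLD]
    return inbox, skipped

jobs = [
    {"title": "SRE", "score": 85, "reason": "Strong Linux overlap"},
    {"title": "iOS Dev", "score": 30, "reason": "Little overlap"},
]
inbox, skipped = triage(jobs)
print([j["title"] for j in inbox])  # → ['SRE']
```

Keeping this decision explicit is what makes the pipeline "Human-in-the-Loop": the autonomous browser agent is only dispatched for jobs you have personally approved.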

🛡️ Security Best Practices

If you fork this repository, ensure your .gitignore is properly configured. NEVER COMMIT:

- Your actual resumes (`.pdf`, `.docx`)
- `.env` files or API keys
- Local SQLite databases (`.db`)
- Browser cookies or Playwright session states
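A starting `.gitignore` covering the items above might look like the following; the exact paths and filenames are illustrative, so adjust them to match your layout:

```gitignore
# Personal documents (never commit PII)
*.pdf
*.docx

# Secrets
.env
.env.*

# Local databases
*.db
*.sqlite3

# Python virtual environment
backend/venv/

# Browser cookies / Playwright session state
*storage_state*.json
cookies*
```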

📄 License

MIT License

Copyright (c) 2026

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
