How I Built a Lightweight AI Workflow for a Targeted Job Search (Part 1: My Network)

February 23, 2026

About a month ago, I was laid off from Vimeo as part of an RIF. I won’t rehash the details here, but the practical outcome was simple: I was suddenly job hunting in a very competitive market.

For the first few weeks, recruiter outreach kept my calendar full. But once that started to taper off, I knew I needed to shift from mostly reactive to intentionally proactive.

At the same time, I’d been spending a lot of my newly free time leveling up how I use AI. I already had experience with different models and tools, but I wanted to get much more deliberate about where AI is actually useful versus where it just adds noise.

This post is the first in a series on that process, focused on one specific problem: how to use AI to run a targeted, network-driven job search.

I started with finding new openings because it’s the very top of the job-search funnel. If this step is slow or inconsistent, every downstream step (prioritizing roles, reaching out to contacts, tailoring outreach, and applying) gets bottlenecked.

Why I Chose a Targeted Approach

I’m not a fan of “spray and pray” applications. It burns time, lowers signal, and usually leads to lower-quality conversations.

Instead, I wanted to prioritize roles at companies where I already know someone. That gives me two advantages:

  • Better context on team culture and working style
  • A higher-quality path into interviews than cold applications alone

In other words, I wanted focus and leverage, not volume.

The Constraints I Had to Work Around

The hard part is doing this at scale without getting sloppy.

LinkedIn doesn’t offer a straightforward public API for this kind of search workflow, and I wasn’t interested in plugging my account into random scraping tools and hoping for the best. I wanted a local setup I could control end to end.

So my constraints were:

  • Keep the workflow local
  • Use my existing browser session/context
  • Automate repetitive collection work without using sketchy tooling

My Setup

I used OpenCode as my local agent harness. I like it because I can switch models easily, and its plan/build workflow matches how I prefer to work.

For this demo, I used GPT-5.3 Codex as the model powering the flow because I’ve been consistently happy with the output quality.

From there, I added Chrome DevTools MCP support in my OpenCode config, then launched Chrome in remote debugging mode so the agent could interact with what I was already viewing in-browser.
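For reference, launching Chrome with remote debugging enabled looks roughly like this (the binary path shown is for macOS and will differ on other platforms; the profile directory is an assumption I'm using for illustration, since recent Chrome versions require a non-default --user-data-dir when enabling remote debugging):

```shell
# Expose the DevTools protocol on port 9222 (must match the MCP config below).
# Adjust the binary path for your platform; the profile dir is a placeholder.
"/Applications/Google Chrome.app/Contents/MacOS/Google Chrome" \
  --remote-debugging-port=9222 \
  --user-data-dir="$HOME/chrome-debug-profile"
```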

Here’s the relevant MCP block from my config:

{
  "mcp": {
    "chrome_existing": {
      "type": "local",
      "command": [
        "npx",
        "-y",
        "chrome-devtools-mcp@latest",
        "--browserUrl",
        "http://127.0.0.1:9222"
      ],
      "enabled": true
    }
  }
}

I named this MCP entry chrome_existing on purpose: paired with the --browserUrl flag, this configuration attaches OpenCode to an already-running browser session instead of launching a fresh one. That let me navigate to LinkedIn manually, set up the search exactly how I wanted it, and then run the job from that prepared state. It also reduces brittle automation around login and initial navigation steps.

That gave me a practical middle ground: AI-assisted browser automation inside my own local environment, with visibility into what it’s doing.

The Workflow I Ran

Once connected, I prompted the agent to collect job opportunities in my network that matched a few constraints:

  • Remote-friendly roles
  • Posted within the last 24 hours

Here’s the kickoff prompt I used for this flow:

I have already opened LinkedIn job search results in this browser session.
The current results are filtered to:
- posted in the last 24 hours
- remote roles
- companies where I have network connections
Task:
1) Create a directory at `~/JobHunt/search-results/<YYYY-MM-DD>` using today's date.
2) Scrape all job listings from the current results page.
3) Paginate through all remaining result pages and continue collecting listings.
4) Save a CSV file in that directory named `linkedin-network-remote-jobs.csv`.
For each listing, collect these columns:
- Job Title
- Company
- Job Post URL
- Salary (if listed; otherwise leave blank)
- High-Level Requirements (2-4 concise bullets or a short sentence)
Behavior requirements:
- Click into the job post when needed to capture missing details.
- Avoid duplicates across pages.
- Keep going until there is no next page.
- At the end, print:
  - total jobs captured
  - number of pages processed
  - output file path

Then I had it export the results to CSV so I could review and sort them quickly in Google Sheets.
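The dedupe-and-export behavior the prompt asks for boils down to something like this sketch. To be clear, this is my own illustration, not the agent's actual code: the function name and sample structure are hypothetical, the column names follow the prompt, and the scraping itself is handled by the agent.

```python
import csv
from pathlib import Path

# Columns mirror the prompt's requested fields.
FIELDS = ["Job Title", "Company", "Job Post URL", "Salary", "High-Level Requirements"]

def export_jobs(listings, out_dir, filename="linkedin-network-remote-jobs.csv"):
    """Write scraped listings to CSV, deduplicating on the job post URL across pages."""
    seen = set()
    unique = []
    for job in listings:
        url = job.get("Job Post URL", "")
        if url and url not in seen:
            seen.add(url)
            unique.append(job)
    out_path = Path(out_dir)
    out_path.mkdir(parents=True, exist_ok=True)
    csv_path = out_path / filename
    with open(csv_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS, extrasaction="ignore")
        writer.writeheader()
        writer.writerows(unique)
    return csv_path, len(unique)
```

Keying the dedupe on the job post URL (rather than title plus company) matters because the same role can appear on multiple results pages as listings shift underneath you.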

I also packaged this workflow as a reusable skill in my public GitHub repo: maxcroy1/skills. You can find this one here: linkedin-job-search-export.

Proof-of-Concept Results

On the first run, I captured 151 unique jobs in about 5 minutes.

For a top-of-funnel task, that throughput is meaningful: it turns a tedious manual sweep into a quick, structured input for the rest of the day.

It sounds like a small improvement, but it removed the most repetitive part of the process: manually opening pages, applying the same filters, and copying results into a tracking doc.

Here’s a sanitized example of what the output looked like:

Job Title,Company,Link to post,Salary,High level requirements,Results page
Platform Engineer / Ruby Automation / Chicago / Hybrid,Motion Recruitment,https://www.linkedin.com/jobs/view/4368174400/,$139K/yr - $174K/yr,N/A,1
Principal Member of Technical Staff,Oracle,https://www.linkedin.com/jobs/view/4345650708/,$96.8K/yr - $223.4K/yr,N/A,1
Senior Software Development Engineer (SDE) 3,Oracle,https://www.linkedin.com/jobs/view/4338161232/,$79.2K/yr - $178.1K/yr,Position requires a U.S. citizen and TS/SCI eligibility with poly | BS or MS in CS or similar technical field | 4-8 years of software development experience,1

Extraction quality was mixed in this first pass, especially for the High level requirements field, which was frequently missing or thin.

Why This Was Useful

This workflow didn’t magically “solve” job searching. What it did do was make my effort more consistent and less manual.

Instead of spending energy on repetitive browsing tasks, I could spend that time on higher-value work:

  • Deciding which roles were worth pursuing
  • Reaching out to people in my network with a specific ask
  • Preparing for interviews already in flight

It also reinforced a broader lesson for me: in this phase, AI is most useful as workflow infrastructure, not as a substitute for judgment.

From Proof of Concept to Daily System

This first version was intentionally a proof of concept. I wanted to validate that the workflow was technically feasible and practically useful before investing in more automation.

The next step is reliability: I need this to run automatically and consistently every day, without manual setup.

To do that, I’ll use OpenClaw to orchestrate the flow on a daily schedule. I chose OpenClaw for two reasons: it’s become a popular option for recurring browser-agent workflows, and I want to pressure-test how reliable and valuable I can make it in a practical, recurring task.

The goal is straightforward:

  • Run the same network-focused search at a fixed time each day
  • Apply the same filters consistently so results are comparable over time
  • Export a fresh CSV automatically so I can review only net-new opportunities

There are a few implementation approaches I’m considering to keep this easy to operate:

  • Keep a “prep, then run” mode where I do a quick manual setup, then let automation handle collection and export
  • Add basic run checks (page loaded, expected filters present, CSV written) so failures are visible immediately
  • Use idempotent output conventions (date-based folders and dedupe rules) so reruns don’t create messy data
  • Add a lightweight fallback path to run the same flow manually if the daily job fails
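The idempotent-output idea above can be sketched in a few lines. This is a simplification under my own assumptions (the base path matches the prompt's `~/JobHunt/search-results` convention; the function names are hypothetical): date-based folders make reruns land in the same place, and a scan of prior CSVs filters results down to net-new jobs.

```python
import csv
from datetime import date
from pathlib import Path

def todays_dir(base="~/JobHunt/search-results"):
    """Date-based output folder, e.g. .../2026-02-23; reruns reuse the same dir."""
    d = Path(base).expanduser() / date.today().isoformat()
    d.mkdir(parents=True, exist_ok=True)
    return d

def net_new(jobs, base="~/JobHunt/search-results", key="Job Post URL"):
    """Drop any job whose URL already appears in a previously exported CSV."""
    seen = set()
    for old_csv in Path(base).expanduser().glob("*/*.csv"):
        with open(old_csv, newline="") as f:
            for row in csv.DictReader(f):
                seen.add(row.get(key, ""))
    return [j for j in jobs if j.get(key, "") not in seen]
```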

The biggest risk is authentication friction. LinkedIn sessions can expire, login challenges can appear, and automated browser flows can break when auth state changes. So the daily design has to assume occasional auth interruptions and recover gracefully rather than pretending they won’t happen.

I’m still evaluating reliability and guardrails, so I’m intentionally keeping this local and low-volume for now.

A one-off script is useful, but a dependable daily run is what turns this into a system I can trust.

Conclusion

This first pass gave me exactly what I wanted: a practical proof of concept that turns a repetitive top-of-funnel task into something structured and repeatable.

The next milestone is making the workflow truly automatic and dependable day to day, including handling authentication hiccups cleanly. Once that is in place, I want to improve the flow by filtering out companies I’ve already applied to and enriching each result with context about the connection I have at that company.

I’ll update this post once I’ve solved the automation side and can share what held up, what broke, and what actually made the system reliable. I also plan to write a follow-up on how I’ll use AI ethically for connection outreach and resume tailoring.


Disclosure: I wrote this post with AI assistance, starting from stream-of-consciousness dictation.