Troubleshooting

01. Crawler is Blocked (403 Forbidden)

You see "403 Forbidden" or "Security Check" errors in the crawl log. This happens when the target site identifies Vexifa as a bot and denies access.

The Solution

A. Change User Agent: Navigate to Settings → Crawler and change the User Agent to "Googlebot" or a modern browser string.

B. Use Proxies: If your IP is rate-limited, go to Settings → Proxies and enter your proxy credentials. Vexifa will rotate requests to bypass detection.
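As a sketch of what these two settings amount to in code (the function and header names here are illustrative, not Vexifa's actual internals), a crawler typically sends a browser-like User-Agent and rotates through a proxy pool round-robin:

```python
from itertools import cycle

def build_request_config(user_agent, proxies=None):
    """Build per-request headers and an optional rotating proxy pool.

    user_agent: the string set under Settings -> Crawler.
    proxies: optional list of proxy URLs to rotate through.
    """
    headers = {
        "User-Agent": user_agent,
        # A realistic Accept header makes the request look less bot-like.
        "Accept": "text/html,application/xhtml+xml",
    }
    proxy_pool = cycle(proxies) if proxies else None
    return headers, proxy_pool

headers, pool = build_request_config(
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    proxies=["http://proxy-a:8080", "http://proxy-b:8080"],
)
```

Each request would take the next proxy from the pool, so no single IP accumulates enough requests to trip rate limits.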

02. AI / Ollama Connection Failed

Vexifa cannot connect to your local AI engine. You see "Ollama server not found" in the AI Assistant tab.

The Solution
  1. Ensure the Ollama application is running in your system tray.
  2. Run ollama serve in a terminal to start the engine manually.
  3. Verify you have pulled a model: ollama pull llama3.
  4. In Vexifa, ensure the Base URL is set to http://localhost:11434.
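To confirm step 4 independently of Vexifa, you can probe the server directly; GET /api/tags is Ollama's standard model-listing route, so any 200 response means the engine is up (the helper below is just a diagnostic sketch):

```python
import urllib.request
import urllib.error

def ollama_reachable(base_url="http://localhost:11434"):
    """Return True if an Ollama server answers at base_url."""
    try:
        # /api/tags lists pulled models; a 200 means the server is running.
        with urllib.request.urlopen(base_url + "/api/tags", timeout=3) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False
```

If this returns False while the Ollama tray icon is visible, the server may be bound to a non-default port; update the Base URL in Vexifa to match.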

03. Windows Defender Alert

Windows displays "Windows protected your PC" or "Unknown Publisher" when launching Vexifa.

The Solution

This is expected for new, independently developed desktop software. To run Vexifa:

  • ✓ Click "More Info" in the blue alert box.
  • ✓ Click "Run Anyway".
Security Tip: You can always verify Vexifa is safe by scanning the .exe at VirusTotal.com before running it.

04. Rank Tracker Failure

Your keyword rankings consistently show "N/A" or "Timeout" for every search query.

The Solution

Rank tracking requires a third-party API key to bypass Google's aggressive bot detection. Ensure your SerpApi or Zenserp key is active and has remaining credits in Settings → Integrations.

Free option: Zenserp provides 50 free SERP lookups per month. This is sufficient for tracking up to 50 keywords once per month at no cost.

05. JavaScript Content Not Captured

The crawl results show empty body text or missing headings for pages that load content dynamically via JavaScript (React, Vue, Angular, etc.).

The Solution

By default, Vexifa's crawler is a fast HTTP scraper — it reads the raw HTML response and does not execute JavaScript. To capture JS-rendered content, enable headless browser mode.

  1. Go to Settings → Crawler → Rendering Mode.
  2. Switch from HTTP (Fast) to Headless Browser (JS).
  3. Set a Wait Time (e.g. 1,500ms) — how long to pause after page load for JS to finish rendering.
  4. Click Save and re-run your crawl.
⚠️ Note: Headless mode is significantly slower and more resource-intensive than HTTP mode. Enable it only for JS-heavy sites that need it.
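A simple way to decide which mode a page needs is to check whether the raw HTML response carries any real body text. This heuristic is not part of Vexifa; it is a sketch of the idea that an empty-looking body usually means content is injected client-side:

```python
import re

def looks_js_rendered(html):
    """Heuristic: True if the raw HTML body has almost no visible text,
    which usually means content is rendered client-side by JavaScript."""
    body = re.search(r"<body[^>]*>(.*?)</body>", html, re.S | re.I)
    if not body:
        return True
    # Strip scripts, styles, and tags, then count what text remains.
    text = re.sub(r"<(script|style)[^>]*>.*?</\1>", "", body.group(1),
                  flags=re.S | re.I)
    text = re.sub(r"<[^>]+>", " ", text)
    return len(text.split()) < 5

static_page = ("<html><body><h1>Hello</h1><p>Plenty of server-rendered "
               "text here for crawlers.</p></body></html>")
spa_page = ('<html><body><div id="root"></div>'
            "<script>renderApp()</script></body></html>")
```

Pages like spa_page (an empty root div plus a script) are the ones that need Headless Browser (JS) mode; pages like static_page crawl fine in HTTP (Fast) mode.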

06. Google Search Console OAuth Failure

When attempting to connect Google Search Console, the OAuth flow shows "Access Blocked" or the authorization window closes without completing.

The Solution

Vexifa uses a local OAuth redirect (loopback) to receive the GSC authorization token. This process can be blocked by browser security settings or antivirus software.

  1. Ensure no browser extension (especially ad-blockers) is blocking redirects to localhost.
  2. If your default browser is a hardened profile, try completing the OAuth flow in an incognito/private window or a different browser.
  3. Temporarily disable any firewall rule that blocks inbound connections to local ports.
  4. After granting access in Google, click Save Settings in Vexifa before closing the authorization window.
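If you suspect step 3, you can test whether something is already occupying a loopback port before starting the OAuth flow. The port you pass in is a placeholder here, since Vexifa's actual redirect port is not documented in this guide:

```python
import socket

def loopback_port_free(port):
    """Return True if nothing on this machine is bound to the given port
    on the loopback interface (so an OAuth redirect listener could use it)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind(("127.0.0.1", port))
            return True
        except OSError:
            return False
```

A False result means another process (or a previous, stuck Vexifa instance) holds the port; closing that process usually lets the OAuth redirect complete.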

07. Scheduled Crawl Not Running

You configured a crawl schedule, but the crawl history shows no entries and the schedule never executes.

The Solution

Vexifa registers scheduled crawls as Windows Task Scheduler jobs. If the job was not registered correctly or your Windows account lacks the required permissions, the task will silently fail.

  1. Open Windows Task Scheduler (search for it in the Start menu).
  2. Look for a task named VexifaSEO_CrawlSchedule in the Task Scheduler Library.
  3. If missing, open Vexifa, go to Settings → Scheduled Crawls, delete and re-add your schedule, then click Save again to re-register the task.
  4. If the task exists but is not running, right-click it → Properties → General and ensure "Run only when user is logged on" is selected (required for local app execution).
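To check step 2 without opening the GUI, Windows' built-in schtasks CLI can query by task name. This sketch wraps that call and simply returns False on non-Windows systems:

```python
import subprocess

def task_exists(name="VexifaSEO_CrawlSchedule"):
    """Return True if a Task Scheduler job with this name is registered.

    Uses the schtasks CLI, so it only gives a meaningful answer on Windows;
    elsewhere the binary is missing and we report False.
    """
    try:
        result = subprocess.run(
            ["schtasks", "/Query", "/TN", name],
            capture_output=True, text=True,
        )
    except FileNotFoundError:  # schtasks not present (non-Windows)
        return False
    # schtasks exits non-zero when the task name is not found.
    return result.returncode == 0
```

If this returns False after you saved a schedule in Vexifa, re-register it as described in step 3.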

08. Structured Data Validation Errors

The audit flags "Structured Data" issues, showing errors like "Missing required property" or "Invalid type" for your JSON-LD markup.

The Solution

Structured data errors mean your JSON-LD schema is present but invalid according to schema.org specifications. This prevents Google from displaying rich results for those pages.

  1. Click the flagged page in the audit results to see the specific property causing the error.
  2. Copy the page URL and paste it into Google's Rich Results Test (search.google.com/test/rich-results) for a detailed error breakdown.
  3. Common fixes: add a missing @type, ensure datePublished is in ISO 8601 format (YYYY-MM-DD), and verify all required properties for your schema type are present.
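The checks in step 3 can be scripted. This sketch validates a minimal Article JSON-LD object against the two most common failure modes; note that the required-property set below is a deliberate simplification of Google's full Article requirements:

```python
import json
from datetime import date

REQUIRED = {"@type", "headline", "datePublished"}  # simplified subset

def article_schema_errors(jsonld):
    """Return a list of problems found in an Article JSON-LD object."""
    data = json.loads(jsonld) if isinstance(jsonld, str) else jsonld
    errors = [f"Missing required property: {p}"
              for p in sorted(REQUIRED - data.keys())]
    for field in ("datePublished", "dateModified"):
        value = data.get(field)
        if value is not None:
            try:
                # ISO 8601 dates start with a YYYY-MM-DD prefix.
                date.fromisoformat(value[:10])
            except ValueError:
                errors.append(f"{field} is not ISO 8601: {value!r}")
    return errors
```

An empty list means the object passes these basic checks; anything else mirrors the "Missing required property" / "Invalid type" wording the audit reports.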

Use Vexifa's AI Issue Prioritiser to rank schema errors by estimated business impact — rich result eligibility is a high-value fix worth prioritising.

09. Hreflang Pair Mismatches

The audit reports "Hreflang Error: missing return tag" or "orphaned hreflang" for your multi-language pages.

The Solution

Hreflang requires reciprocal tags — every page that declares a language alternate must be referenced back by the alternate page. A one-way reference is treated as invalid by Google.

To fix: Ensure that if /en/ declares <link rel="alternate" hreflang="fr" href="/fr/">, then /fr/ must declare <link rel="alternate" hreflang="en" href="/en/"> in return. Every page in the cluster must also reference itself with its own language code.
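The reciprocity rule can be checked mechanically: for every page → alternate declaration there must be a matching alternate → page declaration back. A minimal sketch (the data structure here is illustrative, not Vexifa's internal format):

```python
def hreflang_errors(declarations):
    """declarations: dict mapping page URL -> {hreflang: target URL}.
    Returns 'missing return tag' style errors for non-reciprocal pairs."""
    errors = []
    for page, alts in declarations.items():
        for target in alts.values():
            back = declarations.get(target, {})
            # The alternate page must reference this page in return.
            if page not in back.values():
                errors.append(f"{target} has no return tag pointing to {page}")
    return errors

# A valid cluster: both pages self-reference and reference each other.
cluster = {
    "/en/": {"en": "/en/", "fr": "/fr/"},
    "/fr/": {"fr": "/fr/", "en": "/en/"},
}
```

Removing either return tag from the cluster above produces exactly the "missing return tag" error the audit reports.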

After fixing, re-crawl and re-run the audit. The hreflang checker re-validates all pairs from scratch on each audit run.

10. False Content Freshness Alerts

The Content Freshness Scanner flags pages as "stale" that have been recently updated, or conversely, marks pages as fresh when they are clearly outdated.

The Solution

The Freshness Scanner determines content age from the page's dateModified structured data or, if absent, from the HTTP Last-Modified header. False alerts occur when these signals are absent or incorrect.

False stale (recently updated page flagged as old): Your pages likely lack a dateModified property in their JSON-LD schema. Add an Article or WebPage schema with accurate datePublished and dateModified fields.

False fresh (outdated page not flagged): The Last-Modified header may reflect a CDN cache refresh, not an actual content change. Adding explicit dateModified schema is the authoritative fix.

You can also adjust the Freshness Threshold in Settings → Audit → Freshness Scanner to control how many days of inactivity triggers the alert.
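Conceptually, the scanner's decision reduces to comparing dateModified against that threshold. A sketch of the core logic (the default of 180 days below is illustrative, not Vexifa's actual default):

```python
from datetime import date

def is_stale(date_modified, today, threshold_days=180):
    """True if the page's last modification is older than the threshold.

    date_modified: ISO 8601 string from JSON-LD dateModified, or None if
    the page exposes no reliable signal (treated as stale here).
    """
    if date_modified is None:
        return True
    modified = date.fromisoformat(date_modified[:10])
    return (today - modified).days > threshold_days
```

This also shows why the schema fix is authoritative: with an accurate dateModified present, the scanner never has to fall back on a cache-polluted Last-Modified header.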