
Add AppleIntelligence support#108

Open
rawnly wants to merge 6 commits into jnsahaj:main from rawnly:feature/add-apple-intelligence-backend

Conversation

rawnly commented Jan 13, 2026

This pull request introduces support for Apple Intelligence as a local AI provider. This is done by adding a new backend and using fm-bindings to interact with the local model.

I think it needs some tweaks to the prompt; the quality of the output isn't the best at the moment. I had to perform some workarounds to stop it from printing raw markdown in the commit message.

Summary by CodeRabbit

  • New Features

    • Added macOS-only "Apple Intelligence" local provider for Apple Silicon, selectable as a local backend.
  • Bug Fixes / UX

    • Provider-specific prompts on macOS with improved messaging; model prompt skipped when not applicable to avoid unnecessary input.
  • Documentation

    • Platform requirements updated: Apple Intelligence requires recent macOS on Apple Silicon (Sequoia 15.1+).


coderabbitai bot commented Jan 13, 2026

📝 Walkthrough

Adds macOS (aarch64) Apple Intelligence support: a target-specific fm-bindings dependency, a gated ProviderType and provider entry, configure command branching for macOS providers, and a provider backend using fm_bindings::LanguageModelSession to produce completions.

Changes

Cohort / File(s) / Summary
  • Dependency Management (Cargo.toml): Added a target-specific dependency section for macOS/aarch64 with fm-bindings = { version = "0.1.3", optional = true }.
  • Provider Type Definition (src/config/cli.rs): Added a macOS-gated AppleIntelligence variant to ProviderType and FromStr parsing for "apple-intelligence".
  • Provider Metadata (src/config/providers.rs): Inserted a macOS-gated ProviderInfo entry for apple-intelligence into ALL_PROVIDERS.
  • Interactive Configuration (src/command/configure.rs): Added macOS-specific prompt messaging and skip behavior for the API key/model name when the provider is AppleIntelligence.
  • Provider Implementation (src/provider/mod.rs): Added a macOS-gated AppleIntelligence backend, a conditional import of LanguageModelSession, new ProviderError variants (FmError, TaskError), an init branch, a completion flow that calls fm_bindings from a blocking task, and model-name mapping.
  • Docs (README.md): Added a Platform Requirements note about macOS Sequoia 15.1+ on Apple Silicon for Apple Intelligence features.
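The target-specific gating summarized above can be sketched roughly as follows; this is a sketch based on the change summary, and the exact section layout in the PR's Cargo.toml may differ:

```toml
# Only pull in fm-bindings when building for Apple Silicon macOS;
# the Rust code using it sits behind matching #[cfg] guards.
[target.'cfg(all(target_os = "macos", target_arch = "aarch64"))'.dependencies]
fm-bindings = { version = "0.1.3", optional = true }
```

On any other target triple, Cargo skips this section entirely, so the crate never enters the dependency graph for Linux or Intel macOS builds.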

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant LumenProvider
    participant LanguageModelSession
    participant macOS_runtime as macOS

    Client->>LumenProvider: request completion (AppleIntelligence)
    LumenProvider->>LumenProvider: spawn blocking task
    LumenProvider->>LanguageModelSession: create session (system + user prompts)
    LanguageModelSession->>macOS_runtime: call fm_bindings runtime
    macOS_runtime-->>LanguageModelSession: return completion
    LanguageModelSession-->>LumenProvider: deliver response
    LumenProvider-->>Client: return completion result

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Suggested reviewers

  • jnsahaj

Poem

🐰✨ On Apple silicon I leap and sing,
bindings wake and sessions bring.
Prompts and hops in twilight bright,
answers bloom in macOS light. 🥕🍎

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Description Check: Passed (check skipped; CodeRabbit’s high-level summary is enabled).
  • Title Check: Passed. The title accurately and concisely summarizes the main change: adding Apple Intelligence support as a new backend provider.
  • Docstring Coverage: Passed. Docstring coverage is 83.33%, which is above the required 80.00% threshold.



rawnly force-pushed the feature/add-apple-intelligence-backend branch from 23a0f6d to ef14b90 on January 13, 2026 at 09:00
rawnly marked this pull request as ready for review on January 13, 2026 at 09:35
Copilot AI review requested due to automatic review settings on January 13, 2026 at 09:35
Copilot AI left a comment

Copilot encountered an error and was unable to review this pull request. You can try again by re-requesting a review.

coderabbitai bot left a comment
Actionable comments posted: 2

🤖 Fix all issues with AI agents
In Cargo.toml:
- Line 53: Remove fm-bindings from the unconditional [dependencies] section and add it under a macOS-only target-specific dependencies block so it only builds on macOS. Specifically, delete the line fm-bindings = "0.1.3" from the main dependencies and add a new section [target.'cfg(target_os = "macos")'.dependencies] with fm-bindings = "0.1.3" (the crate is only used behind #[cfg(target_os = "macos")] guards in the code).

In src/provider/mod.rs:
- Around lines 161-173: The code calls blocking fm-bindings APIs directly in async code. Wrap the synchronous LanguageModelSession::with_instructions(...) call and the subsequent session.response(...) call inside tokio::task::spawn_blocking to offload them to the blocking thread pool (use .await on the JoinHandle and map errors to ProviderError::FmError), and make sure to move the needed values (e.g., the constructed system_prompt and prompt.user_prompt) into the closure. Also remove the hardcoded "\ndo not use markdown syntax" suffix, or make it configurable (e.g., derive it from prompt or a flag), instead of always appending it in the ProviderBackend::AppleIntelligence branch.
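The move-values-into-the-closure part of this suggestion can be illustrated with a std-only stand-in: std::thread::spawn plays the role of tokio::task::spawn_blocking, a format! call stands in for the fm-bindings session, and the ProviderError variant name here is illustrative rather than taken from the PR:

```rust
use std::thread;

// Illustrative stand-in for the PR's ProviderError.
#[derive(Debug)]
enum ProviderError {
    Task(String),
}

fn completion(system_prompt: String, user_prompt: String) -> Result<String, ProviderError> {
    // Move the owned strings into the closure, as spawn_blocking's 'static
    // bound would also require; the blocking work then owns its inputs.
    let handle = thread::spawn(move || {
        // Stand-in for LanguageModelSession::with_instructions(&system_prompt)
        // followed by session.response(&user_prompt).
        format!("[{system_prompt}] {user_prompt}")
    });
    // Map a panicked task into an error instead of unwrapping.
    handle
        .join()
        .map_err(|_| ProviderError::Task("blocking task panicked".into()))
}
```

The real version would `.await` the tokio JoinHandle instead of calling `join()`, but the ownership and error-mapping shape is the same.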
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between dd570ed and 2d6f472.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (5)
  • Cargo.toml
  • src/command/configure.rs
  • src/config/cli.rs
  • src/config/providers.rs
  • src/provider/mod.rs
🔇 Additional comments (10)
src/config/cli.rs (2)

53-54: LGTM!

The AppleIntelligence variant is correctly gated with #[cfg(target_os = "macos")], ensuring it's only available on macOS builds.


72-73: LGTM!

The FromStr implementation correctly parses "apple-intelligence" with consistent macOS gating.

src/command/configure.rs (2)

65-76: LGTM!

The provider-specific messaging for local providers (Apple Intelligence and Ollama) is well-structured with appropriate macOS gating.


99-102: LGTM!

Early-exiting for Apple Intelligence makes sense since it doesn't support model selection.

src/config/providers.rs (1)

20-27: LGTM!

The Apple Intelligence provider entry is correctly configured with empty env_key (no API key needed) and empty default_model (not user-selectable). The #[cfg(target_os = "macos")] guard ensures this entry only exists on macOS builds.

src/provider/mod.rs (5)

1-2: LGTM!

The conditional import for LanguageModelSession is correctly gated for macOS.


20-22: LGTM!

The error variant properly wraps fm_bindings::Error with the #[from] attribute for ergonomic error conversion.


34-41: LGTM!

The AppleIntelligence variant is a unit variant (no associated data), which is appropriate since no client configuration is needed for local inference.


63-71: LGTM!

The initialization correctly returns the AppleIntelligence backend with the display name from provider info.


208-209: LGTM!

Returning "Apple Intelligence" as the model name for display purposes is appropriate since there's no user-selectable model.

rawnly force-pushed the feature/add-apple-intelligence-backend branch from 365f90d to 6e65847 on January 13, 2026 at 10:42
coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In src/provider/mod.rs:
- Around lines 161-177: The AppleIntelligence branch currently uses .expect() on spawn_blocking().await, which can panic. Add a new ProviderError variant (e.g., TaskJoinError(JoinError)), update the ProviderBackend::AppleIntelligence block to convert the JoinError into that variant instead of calling .expect(), and propagate the error (use .await.map_err(ProviderError::TaskJoinError)? or equivalent) so the JoinError is mapped into ProviderError and returned rather than panicking.
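The variant-plus-From mapping the comment asks for can be sketched with std types only; JoinError here is a local stand-in for tokio::task::JoinError, and the hand-written From impl plays the role of thiserror's #[from]:

```rust
use std::fmt;

// Local stand-in for tokio::task::JoinError.
#[derive(Debug)]
struct JoinError(String);

#[derive(Debug)]
enum ProviderError {
    TaskJoinError(JoinError),
}

impl From<JoinError> for ProviderError {
    fn from(e: JoinError) -> Self {
        ProviderError::TaskJoinError(e)
    }
}

impl fmt::Display for ProviderError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            ProviderError::TaskJoinError(e) => write!(f, "blocking task failed: {}", e.0),
        }
    }
}

fn complete() -> Result<String, ProviderError> {
    // Where the real code would .await the spawn_blocking handle, the `?`
    // operator now converts a JoinError via From instead of panicking.
    let joined: Result<String, JoinError> = Err(JoinError("task was cancelled".into()));
    let response = joined?;
    Ok(response)
}
```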
🧹 Nitpick comments (2)
src/provider/mod.rs (2)

20-22: Consider a more specific error message.

The error message "AI request failed: {0}" is identical to GenAIError. Consider differentiating it for easier debugging, e.g., "Apple Intelligence request failed: {0}".

Suggested change
     #[cfg(target_os = "macos")]
-    #[error("AI request failed: {0}")]
+    #[error("Apple Intelligence request failed: {0}")]
     FmError(#[from] fm_bindings::Error),

209-215: Consider the resulting display format.

Since provider_name is also set to "Apple Intelligence" (from defaults.display_name), the Display impl on line 220 will output "Apple Intelligence (Apple Intelligence)" which is redundant. Consider returning something like "Local Model" or an empty string for the model.

Suggested change
             #[cfg(target_os = "macos")]
-            ProviderBackend::AppleIntelligence => "Apple Intelligence".to_string(),
+            ProviderBackend::AppleIntelligence => "Local Model".to_string(),

📥 Commits

Reviewing files that changed from the base of the PR and between 365f90d and 6e65847.

📒 Files selected for processing (2)
  • Cargo.toml
  • src/provider/mod.rs
🚧 Files skipped from review as they are similar to previous changes (1)
  • Cargo.toml
🧰 Additional context used
🧬 Code graph analysis (1)
src/provider/mod.rs (1)
src/config/providers.rs (1)
  • for_provider (102-107)
🔇 Additional comments (3)
src/provider/mod.rs (3)

1-2: LGTM!

The conditional import is correctly gated to macOS only, consistent with the platform-specific nature of Apple Intelligence.


34-42: LGTM!

The AppleIntelligence variant is appropriately added as a unit variant since it doesn't require API keys or model configuration. The conditional compilation is correctly applied.


63-71: LGTM!

The implementation correctly follows the existing pattern. Ignoring api_key and model parameters is appropriate since Apple Intelligence uses the local model without external configuration. ProviderType::AppleIntelligence is properly defined in ALL_PROVIDERS, so ProviderInfo::for_provider will locate it without issue.
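A std-only illustration of the init pattern this comment describes; the type and field names are inferred from the review text, not copied from the PR:

```rust
#[derive(Debug, PartialEq)]
enum ProviderBackend {
    AppleIntelligence,
}

struct LumenProvider {
    backend: ProviderBackend,
    provider_name: String,
}

fn init_apple_intelligence(_api_key: Option<String>, _model: Option<String>) -> LumenProvider {
    // api_key and model are deliberately ignored: local inference needs
    // neither credentials nor a user-selected model.
    LumenProvider {
        backend: ProviderBackend::AppleIntelligence,
        // In the PR this display name comes from ProviderInfo::for_provider.
        provider_name: "Apple Intelligence".to_string(),
    }
}
```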

jnsahaj (Owner) commented Jan 14, 2026

This is super interesting. I'll take some time to test it out this weekend and might add some changes.
Thanks!

rawnly (Author) commented Jan 14, 2026

It might need some prompt tweaks but it seems to be working well. I'll take care of the error handling; I was waiting for some feedback before putting more effort into this!

rawnly force-pushed the feature/add-apple-intelligence-backend branch from 2cecc56 to 8619148 on January 20, 2026 at 11:53
coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In Cargo.toml:
- Around lines 65-67: The macOS cfg for the fm-bindings = "0.1.3" dependency is too broad; restrict it or document the platform requirements so builds on macOS earlier than 26, or without Apple Intelligence, don't fail. Tighten the target guard around the fm-bindings dependency (the Cargo.toml block referencing fm-bindings) with a narrower cfg if one is available, or gate the dependency behind a feature flag, and add clear manifest/README notes about the minimum macOS version (macOS 26+) and the Apple Intelligence requirement so consumers know the platform prerequisites.
🧹 Nitpick comments (2)
src/config/providers.rs (1)

20-27: Clarify the empty model/env fields for Apple Intelligence.

Since this is a local provider, the empty strings are likely intentional; a short comment would prevent confusion in future edits.

💡 Optional clarity tweak
     ProviderInfo {
         id: "apple-intelligence",
         provider_type: ProviderType::AppleIntelligence,
         display_name: "Apple Intelligence",
+        // Local provider: no model name or API key required.
         default_model: "",
         env_key: "",
     },
src/provider/mod.rs (1)

215-217: Avoid duplicate provider/model label in Display.

This currently renders “Apple Intelligence (Apple Intelligence)”; consider a shorter model label like “local” or a real model identifier.

💡 Optional tweak
-            ProviderBackend::AppleIntelligence => "Apple Intelligence".to_string(),
+            ProviderBackend::AppleIntelligence => "local".to_string(),

rawnly (Author) commented Jan 21, 2026

It should be fine now. I added the aarch64 target so that only Apple Silicon (M-series) machines are in the compilation target; this way the build should also pass on older machines.

Copilot AI left a comment

Pull request overview

Copilot reviewed 6 out of 7 changed files in this pull request and generated 2 comments.



lto = true

[target.'cfg(all(target_os = "macos", target_arch = "aarch64"))'.dependencies]
fm-bindings = { version = "0.1.3", optional = true }
Copilot AI commented Jan 22, 2026
The fm-bindings dependency is marked as optional = true but is not included in any feature flag in the [features] section. This means the dependency will not be compiled by default, even on macOS with Apple Silicon. The code that uses fm-bindings is conditionally compiled with cfg(all(target_os = "macos", target_arch = "aarch64")), but the dependency itself won't be available unless explicitly enabled.

To fix this, either:

  1. Remove optional = true to make it a regular platform-specific dependency, OR
  2. Add a feature flag like apple-intelligence = ["fm-bindings"] and update the default features to include it on the appropriate platform.
Suggested change
fm-bindings = { version = "0.1.3", optional = true }
fm-bindings = "0.1.3"

Copilot uses AI. Check for mistakes.
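The second option above (a feature flag) could be wired roughly like this; the apple-intelligence feature name is hypothetical and may not match what the PR settles on:

```toml
[features]
# Hypothetical feature that turns on the optional fm-bindings dependency.
apple-intelligence = ["dep:fm-bindings"]

[target.'cfg(all(target_os = "macos", target_arch = "aarch64"))'.dependencies]
fm-bindings = { version = "0.1.3", optional = true }
```

Building with --features apple-intelligence (on a matching target) then enables the dependency; on other targets the section is ignored regardless of features.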
match &self.backend {
#[cfg(all(target_os = "macos", target_arch = "aarch64"))]
ProviderBackend::AppleIntelligence => {
let response = tokio::task::spawn_blocking(move || {
Copilot AI commented Jan 22, 2026
The prompt modification "\ndo not use markdown syntax" is hardcoded in the implementation rather than being part of the AIPrompt building logic. This creates an inconsistency where Apple Intelligence has different prompt behavior than other providers for the same command.

Consider either:

  1. Adding this modification to the AIPrompt building methods if it's needed for all providers, OR
  2. Making this configurable as a provider-specific prompt modifier, OR
  3. Adding a comment explaining why Apple Intelligence specifically needs this modification and why it can't be handled in the prompt building phase.
Suggested change
let response = tokio::task::spawn_blocking(move || {
let response = tokio::task::spawn_blocking(move || {
// NOTE: Apple Intelligence is expected to return plain-text output for CLI usage.
// The underlying fm_bindings::LanguageModelSession does not have a provider-level
// option to disable markdown, so we explicitly request "no markdown" here.
// This modifier is intentionally applied only for the Apple backend instead of
// inside AIPrompt::build_* so that other providers (which can render markdown)
// continue to behave as before.

