Conversation
📝 Walkthrough

Adds macOS (aarch64) Apple Intelligence support: a target-specific fm-bindings dependency, a gated ProviderType and provider entry, configure-command branching for macOS providers, and a provider backend using fm_bindings::LanguageModelSession to produce completions.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant LumenProvider
    participant LanguageModelSession
    participant macOS_runtime as macOS
    Client->>LumenProvider: request completion (AppleIntelligence)
    LumenProvider->>LumenProvider: spawn blocking task
    LumenProvider->>LanguageModelSession: create session (system + user prompts)
    LanguageModelSession->>macOS_runtime: call fm_bindings runtime
    macOS_runtime-->>LanguageModelSession: return completion
    LanguageModelSession-->>LumenProvider: deliver response
    LumenProvider-->>Client: return completion result
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
_Force-pushed from 23a0f6d to ef14b90._
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In @Cargo.toml:
- Line 53: Remove fm-bindings from the unconditional [dependencies] section and add it under a macOS-only target-specific dependencies block so it only builds on macOS. Specifically, delete the line `fm-bindings = "0.1.3"` from the main dependencies and add a new section `[target.'cfg(target_os = "macos")'.dependencies]` with `fm-bindings = "0.1.3"` (the crate is only used behind `#[cfg(target_os = "macos")]` guards in the code).
In @src/provider/mod.rs:
- Around lines 161-173: The code calls blocking fm-bindings APIs directly in async code. Wrap the synchronous `LanguageModelSession::with_instructions(...)` and the subsequent `session.response(...)` calls inside `tokio::task::spawn_blocking` to offload them to the blocking thread pool (`.await` the JoinHandle and map errors to `ProviderError::FmError`), and make sure to move the needed values (e.g., the constructed system_prompt and prompt.user_prompt) into the closure. Also remove the hardcoded "\ndo not use markdown syntax" suffix, or make it configurable (e.g., derive it from the prompt or a flag) instead of always appending it in the `ProviderBackend::AppleIntelligence` branch.
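The shape of the fix can be sketched as follows. To keep the example runnable without a tokio dependency, `std::thread::spawn` stands in for `tokio::task::spawn_blocking`, and `blocking_completion` is a hypothetical stand-in for the synchronous fm-bindings calls; the pattern — move owned values into the closure, then map the error — is the same:

```rust
use std::thread;

// Hypothetical error type standing in for the crate's ProviderError.
#[derive(Debug)]
enum ProviderError {
    FmError(String),
}

// Stand-in for the blocking fm-bindings call; the real code would invoke
// LanguageModelSession::with_instructions(..) and session.response(..).
fn blocking_completion(system_prompt: &str, user_prompt: &str) -> Result<String, String> {
    Ok(format!("[{system_prompt}] {user_prompt}"))
}

fn main() -> Result<(), ProviderError> {
    let system_prompt = String::from("You are a commit message assistant");
    let user_prompt = String::from("summarize the staged diff");

    // In the async provider this would be tokio::task::spawn_blocking(..).await;
    // std::thread::spawn is used here only so the sketch runs without tokio.
    let handle = thread::spawn(move || {
        // `move` transfers ownership of both prompts into the closure.
        blocking_completion(&system_prompt, &user_prompt)
    });

    let response = handle
        .join()
        .expect("worker thread panicked")
        .map_err(ProviderError::FmError)?;

    println!("{response}");
    Ok(())
}
```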
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (5)
Cargo.toml, src/command/configure.rs, src/config/cli.rs, src/config/providers.rs, src/provider/mod.rs
🔇 Additional comments (10)
src/config/cli.rs (2)
53-54: LGTM! The `AppleIntelligence` variant is correctly gated with `#[cfg(target_os = "macos")]`, ensuring it's only available on macOS builds.
72-73: LGTM! The `FromStr` implementation correctly parses `"apple-intelligence"` with consistent macOS gating.

src/command/configure.rs (2)
65-76: LGTM! The provider-specific messaging for local providers (Apple Intelligence and Ollama) is well-structured with appropriate macOS gating.
99-102: LGTM! Early-exiting for Apple Intelligence makes sense since it doesn't support model selection.
src/config/providers.rs (1)
20-27: LGTM! The Apple Intelligence provider entry is correctly configured with an empty `env_key` (no API key needed) and an empty `default_model` (not user-selectable). The `#[cfg(target_os = "macos")]` guard ensures this entry only exists on macOS builds.

src/provider/mod.rs (5)
1-2: LGTM! The conditional import for `LanguageModelSession` is correctly gated for macOS.
20-22: LGTM! The error variant properly wraps `fm_bindings::Error` with the `#[from]` attribute for ergonomic error conversion.
34-41: LGTM! The `AppleIntelligence` variant is a unit variant (no associated data), which is appropriate since no client configuration is needed for local inference.
63-71: LGTM! The initialization correctly returns the `AppleIntelligence` backend with the display name from provider info.
208-209: LGTM! Returning `"Apple Intelligence"` as the model name for display purposes is appropriate since there's no user-selectable model.
_Force-pushed from 365f90d to 6e65847._
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In @src/provider/mod.rs:
- Around lines 161-177: The AppleIntelligence branch currently uses `.expect()` on `spawn_blocking().await`, which can panic. Add a new `ProviderError` variant (e.g., `TaskJoinError(JoinError)`), update the `ProviderBackend::AppleIntelligence` block to convert the `JoinError` into that variant instead of calling `.expect()`, and propagate the error (use `.await.map_err(ProviderError::TaskJoinError)?` or equivalent) so the `JoinError` is mapped into `ProviderError` and returned rather than panicking.
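The suggested error-handling change can be sketched like this. `JoinError` here is a hand-rolled stand-in for `tokio::task::JoinError` so the example compiles with the standard library alone, and the enum is a simplified version of the real `ProviderError`:

```rust
use std::fmt;

// Stand-ins for tokio::task::JoinError and fm_bindings::Error so this
// sketch compiles without external crates.
#[derive(Debug)]
struct JoinError(String);

#[derive(Debug)]
enum ProviderError {
    FmError(String),
    // New variant: the spawned blocking task panicked or was cancelled.
    TaskJoinError(JoinError),
}

impl fmt::Display for ProviderError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            ProviderError::FmError(e) => write!(f, "AI request failed: {e}"),
            ProviderError::TaskJoinError(e) => write!(f, "background task failed: {}", e.0),
        }
    }
}

// Mirrors `.await.map_err(ProviderError::TaskJoinError)?` on the JoinHandle:
// a join failure becomes an error value instead of a panic via .expect().
fn join_task(joined: Result<String, JoinError>) -> Result<String, ProviderError> {
    joined.map_err(ProviderError::TaskJoinError)
}

fn main() {
    let err = join_task(Err(JoinError(String::from("task was cancelled")))).unwrap_err();
    println!("{err}");
}
```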
🧹 Nitpick comments (2)
src/provider/mod.rs (2)
20-22: Consider a more specific error message. The error message "AI request failed: {0}" is identical to `GenAIError`'s. Consider differentiating it for easier debugging, e.g., "Apple Intelligence request failed: {0}".

Suggested change:

```diff
 #[cfg(target_os = "macos")]
-#[error("AI request failed: {0}")]
+#[error("Apple Intelligence request failed: {0}")]
 FmError(#[from] fm_bindings::Error),
```
209-215: Consider the resulting display format. Since `provider_name` is also set to "Apple Intelligence" (from `defaults.display_name`), the `Display` impl on line 220 will output "Apple Intelligence (Apple Intelligence)", which is redundant. Consider returning something like "Local Model" or an empty string for the model.

Suggested change:

```diff
 #[cfg(target_os = "macos")]
-ProviderBackend::AppleIntelligence => "Apple Intelligence".to_string(),
+ProviderBackend::AppleIntelligence => "Local Model".to_string(),
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
Cargo.toml, src/provider/mod.rs
🚧 Files skipped from review as they are similar to previous changes (1)
- Cargo.toml
🧰 Additional context used
🧬 Code graph analysis (1)
src/provider/mod.rs (1)
src/config/providers.rs (1)
`for_provider` (102-107)
🔇 Additional comments (3)
src/provider/mod.rs (3)
1-2: LGTM! The conditional import is correctly gated to macOS only, consistent with the platform-specific nature of Apple Intelligence.
34-42: LGTM! The `AppleIntelligence` variant is appropriately added as a unit variant since it doesn't require API keys or model configuration. The conditional compilation is correctly applied.
63-71: LGTM! The implementation correctly follows the existing pattern. Ignoring the `api_key` and `model` parameters is appropriate since Apple Intelligence uses the local model without external configuration. `ProviderType::AppleIntelligence` is properly defined in `ALL_PROVIDERS`, so `ProviderInfo::for_provider` will locate it without issue.
This is super interesting. I'll take some time to test it out this weekend and might add some changes.
It might need some prompt tweaks, but it seems to be working well. I'll take care of the error handling; I was waiting for some feedback before putting more effort into this!
_Force-pushed from 2cecc56 to 8619148._
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@Cargo.toml`:
- Around lines 65-67: The macOS cfg for the `fm-bindings = "0.1.3"` dependency is too broad; restrict or document the platform requirements so builds on macOS versions below 26, or without Apple Intelligence, don't fail. Cargo has no cfg for a minimum OS version, so gate the dependency behind a feature flag instead, and add clear manifest/README notes about the minimum macOS version (macOS 26+) and the Apple Intelligence requirement so consumers know the platform prerequisites.
🧹 Nitpick comments (2)
src/config/providers.rs (1)
20-27: Clarify the empty model/env fields for Apple Intelligence. Since this is a local provider, the empty strings are likely intentional; a short comment would prevent confusion in future edits.
💡 Optional clarity tweak:

```diff
 ProviderInfo {
     id: "apple-intelligence",
     provider_type: ProviderType::AppleIntelligence,
     display_name: "Apple Intelligence",
+    // Local provider: no model name or API key required.
     default_model: "",
     env_key: "",
 },
```

src/provider/mod.rs (1)
215-217: Avoid duplicate provider/model label in Display. This currently renders "Apple Intelligence (Apple Intelligence)"; consider a shorter model label like "local" or a real model identifier.

💡 Optional tweak:

```diff
 #[cfg(target_os = "macos")]
-ProviderBackend::AppleIntelligence => "Apple Intelligence".to_string(),
+ProviderBackend::AppleIntelligence => "local".to_string(),
```
it should be fine now, I added
Pull request overview
Copilot reviewed 6 out of 7 changed files in this pull request and generated 2 comments.
```toml
lto = true

[target.'cfg(all(target_os = "macos", target_arch = "aarch64"))'.dependencies]
fm-bindings = { version = "0.1.3", optional = true }
```
The fm-bindings dependency is marked as `optional = true` but is not included in any feature flag in the `[features]` section. This means the dependency will not be compiled by default, even on macOS with Apple Silicon. The code that uses fm-bindings is conditionally compiled with `cfg(all(target_os = "macos", target_arch = "aarch64"))`, but the dependency itself won't be available unless explicitly enabled.

To fix this, either:
- Remove `optional = true` to make it a regular platform-specific dependency, OR
- Add a feature flag like `apple-intelligence = ["fm-bindings"]` and update the default features to include it on the appropriate platform.

Suggested change:

```diff
-fm-bindings = { version = "0.1.3", optional = true }
+fm-bindings = "0.1.3"
```
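The feature-flag option might be wired up roughly like this (hypothetical feature name; the project's actual `[features]` section may differ):

```toml
[features]
default = []
# Opt-in feature that pulls in the platform-specific dependency:
#   cargo build --features apple-intelligence
apple-intelligence = ["dep:fm-bindings"]

[target.'cfg(all(target_os = "macos", target_arch = "aarch64"))'.dependencies]
fm-bindings = { version = "0.1.3", optional = true }
```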
```rust
match &self.backend {
    #[cfg(all(target_os = "macos", target_arch = "aarch64"))]
    ProviderBackend::AppleIntelligence => {
        let response = tokio::task::spawn_blocking(move || {
```
The prompt modification "\ndo not use markdown syntax" is hardcoded in the implementation rather than being part of the `AIPrompt` building logic. This creates an inconsistency where Apple Intelligence has different prompt behavior than other providers for the same command.

Consider either:
- Adding this modification to the `AIPrompt` building methods if it's needed for all providers, OR
- Making this configurable as a provider-specific prompt modifier, OR
- Adding a comment explaining why Apple Intelligence specifically needs this modification and why it can't be handled in the prompt building phase.
Suggested change:

```diff
 let response = tokio::task::spawn_blocking(move || {
+    // NOTE: Apple Intelligence is expected to return plain-text output for CLI usage.
+    // The underlying fm_bindings::LanguageModelSession does not have a provider-level
+    // option to disable markdown, so we explicitly request "no markdown" here.
+    // This modifier is intentionally applied only for the Apple backend instead of
+    // inside AIPrompt::build_* so that other providers (which can render markdown)
+    // continue to behave as before.
```
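The second option above (a provider-specific prompt modifier) could be sketched like this; `ProviderDefaults.prompt_suffix` and `apply_suffix` are hypothetical names, not part of the PR:

```rust
// Hypothetical provider metadata: each provider may declare an extra
// instruction appended to the user prompt (None for providers that
// render markdown fine).
struct ProviderDefaults {
    prompt_suffix: Option<&'static str>,
}

const APPLE_INTELLIGENCE: ProviderDefaults = ProviderDefaults {
    prompt_suffix: Some("\ndo not use markdown syntax"),
};

const OPENAI: ProviderDefaults = ProviderDefaults { prompt_suffix: None };

// Applies the modifier in one place instead of hardcoding it inside the
// AppleIntelligence match arm.
fn apply_suffix(user_prompt: &str, defaults: &ProviderDefaults) -> String {
    match defaults.prompt_suffix {
        Some(suffix) => format!("{user_prompt}{suffix}"),
        None => user_prompt.to_owned(),
    }
}

fn main() {
    println!("{}", apply_suffix("explain the staged diff", &APPLE_INTELLIGENCE));
    println!("{}", apply_suffix("explain the staged diff", &OPENAI));
}
```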
This pull request introduces support for Apple Intelligence as a local AI. This is done by adding a new backend and using fm-bindings to interact with the local model.
I think it needs some tweaks to the prompt; the quality of the output isn't the best at the moment. I had to perform some workarounds to keep it from printing raw markdown in the commit message.