
If you have ever wanted to run your own AI at home or on your own servers without depending on anyone, LocalAI has become one of the essential references in the open-source ecosystem. It is not just another project: it is a whole family of tools designed to act as a drop-in replacement for the OpenAI API and other commercial platforms, but running locally, with full control over your data and no mandatory GPU.
More than a simple inference server, LocalAI has grown into a model-serving platform: a complete stack for agents, semantic memory, multimodal generation and distributed deployment. All of this is built on a modular architecture that adapts both to very modest hardware and to advanced infrastructure with GPUs, Jetson boards or distributed clusters.
What is LocalAI and why is everyone talking about it?
LocalAI is an open-source project under the MIT licence that works as a REST API compatible with the OpenAI specification (and with similar services such as Anthropic or ElevenLabs), but runs entirely on your own machine or local infrastructure. It is maintained by Ettore Di Giacinto and a very active community, and it has already accumulated tens of thousands of stars on GitHub, reflecting the enormous interest in cloud-free AI solutions.
The core idea is that you can use your existing clients, SDKs and tools built for the OpenAI API without changing any code: simply point their endpoints at your LocalAI instance. From there you can run LLMs, generate images and audio, use TTS, perform semantic search and object detection, and much more, all locally and without sending any data outside.
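As an illustration of that drop-in compatibility, the sketch below builds a request for the OpenAI-style chat completions route; the port (`8080`) and the model name (`gemma`) are assumptions for a hypothetical local instance, not fixed values:

```python
import json

# LocalAI serves the same routes as the OpenAI API, so the only thing an
# existing client needs to change is its base URL.
BASE_URL = "http://localhost:8080/v1"  # assumed default LocalAI port

# Standard OpenAI-style chat completions body; "gemma" is a placeholder
# for whichever model is actually installed on your instance.
payload = {
    "model": "gemma",
    "messages": [
        {"role": "user", "content": "Summarise this document in one line."}
    ],
}

url = f"{BASE_URL}/chat/completions"
body = json.dumps(payload)
```

Any OpenAI SDK can be pointed at the same URL by overriding its base-URL setting, supplying a dummy API key if the library insists on one.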
One of its most notable advantages is that you do not actually need a GPU. Many models can run on CPU alone, which opens the door to deploying it on a NAS, a NUC, an old server or any machine with modest resources, adjusting model size and quantization to the limits of your hardware.
The Local Stack family: LocalAI, LocalAGI and LocalRecall
As the project has grown, it has turned into a "family" of connected tools covering far more than simple model inference. Today, what is known as the "Local Stack" is made up of three main components that can work together or independently.
On one side, LocalAI remains the central pillar, acting as the OpenAI-compatible API for text, images, audio and other modalities. It handles communication with the various inference backends (llama.cpp, vLLM, transformers, diffusers, etc.) and exposes a unified interface supporting chat, completions, image generation, TTS, embeddings, reranking and even experimental endpoints such as text-to-video.
Alongside it sits LocalAGI, which works as an AI agent orchestration platform with advanced support for agentic tools and workflows. It acts as a drop-in replacement for OpenAI's Responses API, letting you define agents that can reason, plan steps, use external tools and orchestrate complex tasks autonomously, while always running locally.
The third component is LocalRecall, designed as a REST API and knowledge-management system providing persistent memory for agents. Essentially, it supplies a semantic storage layer, a vector DB and long-term context management, so that agents and models can remember information, documents and conversation state over time without relying on external services.
Key capabilities: beyond a simple local LLM
One reason LocalAI has gained so much traction is that it is not limited to serving large language models. The project covers a very broad range of AI capabilities, making it a kind of "universal infrastructure" for self-hosted intelligent applications.
On the language side, LocalAI lets you run LLMs from many model families (Llama, Gemma, Qwen, Phi, Mistral, SmolVLM and others), with support for models in GGUF format via llama.cpp, or through backends such as transformers or vLLM, depending on the available hardware and the performance requirements.
In terms of multimodal vision and generation, LocalAI supports diffusion models, image editing, vision-language models and real-time object detection. This includes integrations with projects such as stable-diffusion.cpp and HuggingFace diffusers, models like FLUX, WAN or Qwen 3 VL, and a dedicated object-detection API backed by rf-detr, which performs remarkably well even on CPU.
Audio is another strong point: LocalAI brings together real-time voice, text-to-speech and speech recognition, including voice cloning. It covers everything from whisper.cpp and faster-whisper for transcription, to TTS engines such as Bark, Bark-cpp, Coqui, Kokoro, KittenTTS, Piper, Chatterbox, neutts or Vibevoice, plus voice activity detection (VAD) models such as silero-vad to decide when to speak and when to yield to silence.
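Since the audio endpoints follow the same OpenAI conventions, a TTS request can be sketched in the same way; the model and voice identifiers below are placeholders for whichever engine you have actually installed:

```python
import json

# OpenAI-style text-to-speech request aimed at a local LocalAI instance.
# "piper-voice" and "en-us" are hypothetical identifiers: substitute the
# TTS model (Piper, Kokoro, Bark, ...) and voice you have installed.
request = {
    "model": "piper-voice",
    "input": "Hello from a fully local assistant.",
    "voice": "en-us",
}

endpoint = "http://localhost:8080/v1/audio/speech"  # assumed default port
wire_body = json.dumps(request)
```

The response, as with OpenAI's equivalent route, is the raw audio stream, so a client would save the bytes straight to a file.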
Modular architecture: a lightweight core binary plus on-demand backends
One of the project's biggest recent changes has been the move to a fully modular architecture in which the main LocalAI binary is decoupled from the backends. Previously, the "all-in-one" images were huge and bundled every possible engine by default, which made deployment and lightweight upgrades difficult.
Under this new philosophy, the base Docker image and the LocalAI binary are very small, and backends are only downloaded when they are needed. When you install a model from the gallery or via YAML files, LocalAI automatically detects your hardware (CPU, NVIDIA, AMD or Intel GPU) and downloads the appropriate variant of the backend that the model requires.
Moreover, thanks to this design, you can now manage backends independently from dedicated galleries, even using development versions. This means you no longer have to wait for a new LocalAI release to try the latest llama.cpp, whisper.cpp or diffusers backend: simply update that component and the system will use it straight away.
Another useful detail, much appreciated by people working in air-gapped environments or with very specific requirements, is the ability to load custom backends by copying their binaries into a designated folder. Without rebuilding entire containers, you can test patched builds, architecture-specific variants or customised backend builds without touching the rest of the system.
Compatibility with a wide range of AI backends
LocalAI integrates a very extensive list of backends to cover different model types and use cases, along with acceleration support tailored to each kind of hardware. The heart of the LLM side usually revolves around llama.cpp, vLLM and transformers, but there are many more.
In the general LLM category, llama.cpp provides efficient C/C++ inference with support for CUDA, ROCm, Intel SYCL, Vulkan, Metal and pure CPU, allowing quantized models to run on machines without a GPU. vLLM brings PagedAttention and throughput-oriented optimisations, with CUDA and ROCm acceleration, while transformers opens the door to the huge catalogue of HuggingFace models on CUDA, ROCm, Intel and CPU.
For audio, backends such as whisper.cpp and faster-whisper are integrated for fast, portable speech recognition on CPU or GPU, alongside a wide array of TTS engines: Bark and Bark-cpp, Coqui, Kokoro, Kitten-TTS, Piper, Chatterbox, neutts and Vibevoice, each with its own balance of quality, latency and hardware requirements, from pure CPU up to CUDA, ROCm, Metal or Intel.
On the vision and diffusion side, the project supports stablediffusion.cpp as a C/C++ implementation of Stable Diffusion, as well as the HuggingFace diffusers library for generating new images and for editing models. Depending on the backend, CUDA, ROCm, Intel SYCL, Metal or plain CPU can be used.
Beyond LLMs, audio and images, LocalAI also integrates specialised backends such as rfdetr for object detection, document transcription engines, and a local vector store. On top of that, it connects to the HuggingFace API to combine local and remote inference when needed. This makes the platform extremely versatile for building advanced search systems, document-navigation assistants or local MLOps pipelines.
Acceleration: from optimised CPU to GPUs, Metal and Jetson
To make sure nobody is left out, LocalAI offers a very flexible acceleration layer, with builds for almost any kind of modern hardware. If you have an NVIDIA GPU, you can use CUDA 12 or 13 across most compatible backends, from llama.cpp to diffusers or coqui, tuning the number of GPU layers or the workload to match your resources.
When it comes to AMD graphics cards, LocalAI relies on ROCm to accelerate key backends such as llama.cpp, whisper, vLLM, transformers, diffusers, rerankers and various TTS engines. This is particularly interesting for anyone building a homelab around Radeon cards. On Intel hardware, support comes via oneAPI and related technologies, enabling acceleration in backends such as llama.cpp, whisper, stablediffusion, vLLM, diffusers, rfdetr, rerankers and voice engines like Coqui or Bark.
If you work on a Mac, the platform integrates Metal backends along with Apple's MLX and MLX-VLM, delivering inference optimised for M1, M2 and M3+ chips for both LLMs and multimodal models, in addition to support for bark-cpp and other Metal-compatible components.
Embedded scenarios have not been forgotten either: there is dedicated support for NVIDIA Jetson with CUDA 12 and 13. This makes it possible to run llama.cpp, whisper, stablediffusion, diffusers and rfdetr on ARM64 devices such as the AGX Orin or other edge-computing platforms, which is very useful for robotics, security or smart-IoT projects.
And, of course, all of this coexists with CPU-optimised executables, supporting instruction sets such as AVX, AVX2 and AVX512. In addition, backend variants such as whisper.cpp builds compiled specifically for the processor's capabilities avoid "illegal instruction" errors on older or lower-powered machines.
Installation: binaries, script, Docker and AIO
On a practical level, the LocalAI team has put a great deal of effort into making sure that getting started is not a headache. There are several installation methods depending on the environment and level of experience, covering both quick tests and more serious deployments.
On one hand, you can start with an installation script that downloads the right binary and sets up the basics. Direct binaries are also available for various desktop platforms, although on macOS, for example, the DMGs are not signed by Apple, which can cause the system to flag them as "quarantined" and require a small workaround to open them (the team maintains tracking issues with workarounds and possible improvements).
Another very common approach is to use Docker to deploy LocalAI as a self-contained container, whether from CPU, GPU or AIO images that ship with pre-downloaded models. You can choose CPU-only images, combined CPU+GPU images, or All-In-One images that include an initial set of ready-to-use models, although the latter take up considerably more space, and there have been warnings that some of these "extra" variants may be deprecated in the future in favour of the new backend-management system.
When working with Docker, it is important to distinguish between `docker run`, which creates and starts a new container, and `docker start`, which simply starts an existing one. If you have already launched LocalAI and want to bring it back up, the right approach is to use something like `docker start -i local-ai`, to avoid duplicating containers or creating conflicts with names that are already registered.
Model loading and automatic backend detection
Once you have LocalAI up and running, the next step is to load the models you are going to use, either from the official gallery or via YAML configuration files. This is the stage where the automatic hardware logic and backend detection come into play.
When you select a model in the WebUI or define one in YAML, LocalAI analyses your machine's capabilities (GPU type, whether NVIDIA, AMD or Intel, CPU features, etc.) and downloads the right backend for that model and device combination. This way, you avoid having to work out by hand which llama.cpp, diffusers or whisper.cpp binary you need for your particular setup.
If you need more control, the YAML configuration lets you tune parameters such as context size, number of GPU layers, mmap usage, quantization, or the definition of agentic tools. Also, thanks to the WebUI overhaul, it is now possible to edit the entire YAML directly from the graphical interface, without SSHing into the server or editing files by hand.
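A minimal model definition touching those parameters might look like the sketch below. The key names follow LocalAI's model YAML format, but treat the exact values, the model name and the GGUF filename as illustrative rather than canonical:

```yaml
# models/gemma.yaml -- illustrative model definition
name: gemma
backend: llama-cpp        # omit to let LocalAI pick a backend automatically
context_size: 4096        # maximum context window
gpu_layers: 35            # layers offloaded to the GPU (0 for pure CPU)
mmap: true                # memory-map the model file instead of loading it whole
parameters:
  model: gemma-2b-q4_k_m.gguf   # hypothetical quantized GGUF file
  temperature: 0.7
```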
A redesigned WebUI: visual management of models, chat and agents in LocalAI
The web interface has been thoroughly redesigned, aimed at advanced users while remaining approachable for anyone who just wants to explore visually. The migration from HTML to a combination of Alpine.js and vanilla JavaScript has significantly improved the speed and fluidity of the experience, especially on instances with many configurations or models.
From this WebUI you can access chat, image generation, audio, model management and internal configuration. There is a model list with fuzzy search, so even if you make a typo (for example, "gema" instead of "gemma"), the system will still show you the relevant results without forcing you to get the exact name right.
One of the most useful points is that the WebUI lets you view and edit each model's full YAML configuration from the browser, without leaving the application. There you can change the maximum context, enable or disable multimodal support, adjust performance parameters, or define tools and MCP servers for agents, with everything taking effect as soon as you save your changes.
Agents and MCP support: AI that uses tools locally
In its latest versions, LocalAI has taken a major step forward by adding full support for the Model Context Protocol (MCP) and advanced agentic capabilities. This makes it possible to build agents that not only answer questions, but can also use external tools, plan steps and orchestrate complex tasks.
The MCP integration builds on the framework developed by LocalAGI and related projects such as Cogito, resulting in a simple way of working: you define "MCP servers" as containers or external services that expose tools. For example, you could have an MCP server that runs searches on DuckDuckGo, another that queries your company's internal APIs, or another that executes scripts on your local machine.
From a developer's point of view, it is enough to configure these MCP servers in the model's YAML, with no need to write Python code or use specific libraries. Once configured, you can use the /mcp/v1/chat/completions endpoint, compatible with the OpenAI API, or directly enable the "MCP Agent Mode" from the chat WebUI so the model starts using tools whenever it deems it necessary.
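To make the "configure it in the model's YAML" part concrete, here is a deliberately hedged sketch of what attaching an MCP server to a model definition could look like. The key names and the server URL are illustrative assumptions, so check them against the current LocalAI MCP documentation before relying on them:

```yaml
# Illustrative only: key names may differ from the shipped schema.
name: my-agent-model
parameters:
  model: my-model.gguf          # hypothetical GGUF file
mcp:
  remote: |
    {
      "mcpServers": {
        "websearch": { "url": "http://localhost:9090/sse" }
      }
    }
```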
The team has also invested effort in improving the robustness of function calls and JSON schema handling, fixing errors and potential failures that arise when models generate incomplete tool definitions. With these improvements, tool usage and agent workflows are considerably more stable in production.
The LocalAI roadmap and the project's ongoing evolution
LocalAI moves at a remarkable pace, with a public roadmap in the form of tagged issues where you can follow recent updates and those planned for the coming months. The roadmap shows a continuous stream of development covering both new capabilities and internal improvements.
In recent times, features such as distributed inference, a federated mode, P2P for running LLMs across a network, dashboards for managing instance swarms, and support for new models and backends (Flux, MLX-Audio, WAN, SANA, Bark.cpp, stablediffusion.cpp, etc.) have been added, along with a Reranker API and an integrated object-detection API.
There have also been major milestones, such as moving all inference out of the main binary to reduce its weight, the arrival of a new launcher for macOS and Linux, continuous improvement of the WebUI, and the addition of experimental APIs such as text-to-video via /v1/videos, which connects to local AI tooling such as local video editing, all as part of the roadmap. Future plans include much more robust memory management, improved multi-GPU support, new agent integrations, and an expanded MCP tooling system.
Community use cases and the Local AI Chatbot mobile app
The spirit of LocalAI is deeply tied to its community, as shown by the creator's own posts in forums such as r/selfhosted or r/LocalLLaMA. These are the places where new features are shared first-hand and user questions get answered. Many of the ideas focus on how to integrate LocalAI as the private "brain" behind automations and personal projects.
One of the cases that best illustrates the "make everything local" approach is the emergence of mobile apps such as Local AI Chatbot from Software Tailor, which offer chat with advanced models directly on the device, with no internet connection. This app lets you talk to models such as DeepSeek R1, Qwen, Mistral, Llama 3 or Phi while completely offline, preserving 100% privacy and relying solely on the phone's hardware.
Among its features are support for multiple models with quick switching between them, a design focused on efficient resource usage, and a clean, distraction-free chat interface. It is aimed at privacy-conscious users, professionals handling sensitive information, people in areas with poor connectivity, and AI enthusiasts who want to experiment with local models.
These kinds of solutions show how the ecosystem around LocalAI and local AI in general reaches beyond the core server, bringing the "everything on your device" philosophy to mobile, desktop and other form factors, so that anyone can enjoy advanced assistants without depending on remote services.
The LocalAI project and its family of tools demonstrate how to build a complete private, extensible, modular and multimodal AI stack, capable of covering everything from simple chat to complex agents with memory and tools, without giving up free-software freedoms or full control over your data. That positions it as a very serious alternative for anyone who does not want the artificial intelligence behind their projects to depend on third parties.
