Introduction
UkuhlolwaOkokuqala, ufuna ukuxhumana ne-ChatGPT nge-API, thina amayunithi amabili we-context, futhi ungathanda ukuthi inikezela kokubili. Emva kwalokho uzodinga ukwenza into enhle. Emva kwalokho - ukwenza kahle. Ekugcineni - ukwenza ngaphandle kwami.
Ngokwenza lokhu, i-Agent isekelwe.
Uma uneminyaka eminyakeni elidlulile iholide ama-agents e-scripts ne-wrappers, i-experimenting ne-tinkering, futhi ungathanda indlela enhle kakhulu, enhle yokwakha kwabo - le nqaku iyiphi na. Ngitholela nge-repo kanye ne-forums, ngokuvamile ngitholile, "How are others doing it?" > Ngitholile okuhlobene - okuhlobene ngokuvamile ngemuva kokusebenzisa okwenziwe, futhi ngokuvamile i-destilled a set of core principles for turning a cool idea into a production-ready solution.
Yenziwe njenge-cheat sheet esebenzayo - isithombe se-engineering principles eziza kuholele i-agent kusuka ku-sandbox kuya ekukhiqizeni: ukusuka ku-API eyodwa kuya ku-stable, i-controlable, ne-scalable system.
Disclaimer
NgenaUmhlahlandlela(Ukuvelisa ama-agents asebenzayo), I-Anthropic ivame i-agent njenge-system lapho i-LLM isilawule ngokushesha izinhlelo zayo zayo zayo nezinsizakalo, ukugcina ukulawula indlela yokusebenza izinsiza. I-Systems lapho i-LLM kanye nezinsizakalo zihlanganiswa ngokusebenzisa izindlela ze-code ziye ziye zithi Workflows. Zonke zihlanganisa inkampani ephelele kakhulu - Ama-Agent Systems.
Kule nqaku, i-Agent = i-Agent uhlelo, lapho ngenxa ye-stability ne-control ngama-Workflows. Ngingathanda ukuthi emva kokubili kunyaka elizayo kuya kuba i-1-1 ama-turns ezininzi ye-evolution futhi ama-Agents eyenziwe ngokuvamile, kodwa manje lokhu akuyona
1. Design the Foundation
I-Design ye-FoundationIzinguquko ezidlulile ze-agents zihlanganisa ngokushesha: izici ezimbalwa, izincazelo ezimbalwa - futhi hey, kusebenza.
Izinguquko ezidlulile ze-agents zihlanganisa ngokushesha: izici ezimbalwa, izincazelo ezimbalwa - futhi hey, kusebenza.
“If it works, why make it complicated?”
Okokuqala, konkeUkubonisaI-agent isabela, isebenza ikhodi, isebenza ngokufanelekayo. Kodwa uma ushiye imodeli, ukuguqulwa kwekhwalithi, noma uxhumane isixhumanisi esitsha - ngokushesha kubangelwa ebonakalayo, engathengiswa, ezinzima ukuguqulwa.
Futhi ngokuvamile, umdla wokugqibela akuyona emangalisayo noma imibuzo, kodwa ngempumelelo kakhulu: ukulawulwa kwe-memory, umphathi we-hardcoded, akukho indlela yokuvumela izivakashi, noma isikhunta esisodwa se-rigid.
Kulesi isigaba ivela ngezinyathelo ezine eziphambili eziza kukunceda ukwakha isakhiwo esiyingqayizivele - eyodwa ukuthi konke okungenani kungenziwa ngokushesha.
1. Keep State Outside
Problem:
- Uma idivayisi ivimbele (ukushuka, isikhathi eside, noma yini), kufanele kube lula ukujabulela esifanele lapho ivimbele.
- I-Reproductibility iyimfuneko. Ufuna indlela yokuguqulwa ngokunemba okufanayo - ukuhlolwa, ukucubungula nezinye izinzuzo.
Kuyinto engapheliyo kakhulu, kodwa kunjalo:
- Parallelization. Okusheshayo noma ngempumelelo uzodinga ukuhanjiswa logic of the agent. Mhlawumbe kufuneka ukuhanjiswa izinketho ezininzi phakathi-dialogue (“Uke of these is better?”). Mhlawumbe uzodinga ukuba uxhumane. Who knows – uzodinga.
(I-Memory iyinkinga elilodwa enhle - siza kufike kulabo ngokushesha)
Solution:Ukusuka StateoutsideI-agent - ku-database, i-cache, i-storage layer - ngisho ifayela le-JSON uzothola.
Checklist:
- I-agent ingasungulwa kusukela eminyakeni eyodwa, nge-session_id kuphela - i-session identifier - kanye ne-state external (isib. Izinyathelo ezihlaziywa ku-database noma ifayela le-JSON). Kwi-stage eyodwa, ungasungula umsebenzi we-agent, ukuguqulwa kwalo (ngaphandle kokuguqulwa kwezinye izinto ngaphansi kwe-cap), futhi iyasebenza njenge-nothing happened.
- I-Test Case: I-agent e-interrupted akuyona inkulumo, emva kokufaka okungenani, imiphumela enhle
- I-Status iyahambisana ngexesha elandelayo ngaphandle kokubili umsebenzi
- Ungahambisa idatha ku-instances eziningana ngokubambisana phakathi kwe-dialogue
2. Make Knowledge External
Problem: I-LLM ayikho. Ngaphansi kwe-session eyodwa, i-model ingasiza into etholakalayo, ukuxuba izigaba, ukuphazamiseka kwengxowa, noma ukuqala "ukushintshwa" ama-details asikho. Futhi kubonakala ukuthi isikhathi eside, ifenisha le-context ikakhulu futhi ikakhulu, okuphendula nathi nezimfuneko ezintsha. I-LinkedIn iyatholakala emaphaketheni lapho abantu zihlanganisa okuyinto ibhokisi noma iiyure amanye ye-YouTube yevidiyo ekubeni inguqulo entsha ye-model. Kodwa kunjalo, i-LLMs akufanele ukunakekelwa futhi kufanele uxhumane.
ikakhulukazi uma:
- I-Dialogue ye-Long
- Izici zihlanganisa
- izicelo zihlanganisa
- futhi i-tokens ayikho endless
Ngaphandle kwe-context windows (i-8k, i-16k, i-128k ...), izimo zihambe:
- "I-Lost In The Middle" - imodeli ibonise ukunakekelwa kokuqala nokugqibela (ngokukwazi ukunciphisa imibala kusuka ku-middle)
- Izindleko - more tokens = ngaphezulu imali;
- Futhi akuyona akuyona. Lokhu kubalulekile ukuba kunikeza isisindo, ukucindezeleka, noma i-hallucinations. Uma ama-transformers asebenza ku-self-attention nge-complexity square (O(n2)), lokhu ukucindezeleka uzothola nathi.
Solution:Ukuhlukanisa "ukudluliselwa" kusuka "ukudluliselwa" - njengezinhlelo ze-computing ezivamile. I-agent kufanele kube lula ukusebenza nge-ukudluliselwa ye-external: ukugcinwa, ukubuyekeza, ukuphefumula, nokuvakasha ulwazi ngaphandle kwe-model. Kukhona izindlela eziningana zokusebenza, futhi zonke zihlanganisa izindawo zayo.
Approaches
Memory Buffer
Ukubuyekeza LastkIzindaba. Hlola prototyping ngokushesha.
+Easy, fast, kulula izicelo ezincinane
-ukuphazamiseka ulwazi ebalulekile, akufanele, akufanele "ngesamuva"
Summarization Memory
I-History Compresses ukuze ifakwe ngaphezulu.
+Token savings, ukubuyekeza ikhono
-ukucindezeleka, ukucindezeleka kwezingane, imiphumela e-multi-step compression
RAG (Retrieval-Augmented Generation)
Ukukhuthaza ulwazi kusuka ku-databases ezingenalutho. Okuningi isikhathi uzothola lapha.
+Scalable, okusheshayo, okuhlolwa
-ukucubungula okuhlobene, sensitive ukubuyekeza izinga, ukucubungula
Knowledge Graphs
Ukuxhumanisa ukuxhumanisa phakathi kwezinto namafakelo. Ngokuvamile elegance, sexy futhi ezinzima, uzothola RAG nangona kunjalo.
+logic, ukucaciswa, ukuzinza
-izinga eliphezulu sokufika, ukucindezeleka kwama-LLM
Checklist:
- All conversation history is accessible in one place (outside the prompt)
- Izinto ezaziwayo zithunyelwe futhi zithunyelwe
- I-Historic Scales ngaphandle kokungabikho kokuphumelela kwe-context window
3. Model as a Config
Problem:I-LLM iyahluka ngokushesha; I-Google, i-Anthropic, i-OpenAI, njll zithunyelwe ngokushesha ama-updates, zihlanganisa phakathi kwamanye amazwe ahlukahlukene. Lokhu kubalulekile kwethu njengama-engineers, futhi singathanda ukufinyelela kakhulu. I-agents yethu kufanele kube lula ukuguqulwa kwama-model engcono (noma ngokugcwele, engcono) ngokushesha.
Solution:
- Implement model_id configuration: Usebenzisa i-parameter model_id kuma-file ye-configuration noma ama-environment variables ukucacisa isampula esebenzayo.
- Ukusetshenziswa kwe-abstract interfaces: Yenza i-interfaces noma izigaba ze-wrapper ezivela kuma-models ngokusebenzisa i-API eyodwa.
- Apply middleware solutions (carefully—we'll talk about frameworks a bit later)
Checklist:
- Ukushintshwa kwe-model akugqithisa inkinobho yebhizinisi futhi akugqithisa ukusebenza kwe-agent, ukuguqulwa kwe-orchestration, ingcindezi, noma izixhobo
- Ukongeza imodeli entsha kuncike kuphela ukulawula futhi, okungenani, i-adapter (i-layer elula enikeza imodeli entsha ku-interface efunekayo)
- Ungahambisa ama-models ngokushesha futhi kalula. I-Ideal - noma iyiphi i-models, okungenani-ukudlulisela ngaphakathi kwe-model family
4. One Agent, Many Interfaces: Be Where the Users Are
Problem:Ngisho ukuthi ekuqaleni i-agent iyatholakala ukuba i-interface ye-communication kuphela (isib. I-UI), ungathanda ukunikeza abasebenzisi ukunambitheka nokufanele ngokongeza ukuxhumana ngokusebenzisa i-Slack, i-WhatsApp, noma, ngithanda ukuthi, i-SMS - noma yini. I-API ingangena ku-CLI (noma uzodinga enye yokuthintela). Yenza lokhu ku-designing yakho kusukela ekuqaleni; ukwenza kungcono ukusebenzisa i-agent yakho lapho iyatholakala.
Solution: Creating a unified input contractUkulungiselela i-API noma inqubo ye-API esebenza njenge-interface ye-universal ye-channels. Storage i-channel interaction logic ngokulinganayo.
Checklist:
-
Agent is callable from CLI, API, UI
-
All input goes through a single endpoint/parser/schema
-
All interfaces use the same input format
-
No channel contains business logic
-
Adding a new channel = only an adapter, no changes to core
I-I-Define Agents Ukusebenza
Nangona kunezinto eyodwa, konke kulula, njenge-posts ze-AI evangelists. Kodwa uma uchungechunge izixhobo, isisombululo sokuqala, kanye nezinyathelo ezininzi, umeluleki uye kwangaphakathi.
Nangona kunezinto eyodwa, konke kulula, njenge-posts ze-AI evangelists. Kodwa uma uchungechunge izixhobo, isisombululo sokuqala, kanye nezinyathelo ezininzi, umeluleki uye kwangaphakathi.
Yenza i-track, ayikwazi ukwenza kanjani imiphumela, ayizange ukubiza isixhobo olufanelekayo - futhi ushiye nje nge-logs lapho "okungcono, konke kubonakala ibhalwe khona."
Ukuze ukunceda okuhle, umphathi kufuneka uxhumanebehavioral model: lokho ibhizinisi, izixhobo zayo, umuntu ukuthatha izixazululo, indlela abantu ukuxhaswa, futhi yini ukwenza uma izinto zihlala.
Kulesi isigaba esifundisa izinsizakalo eziza kukunceda ukunikezela inqubo yakho yokusebenza ngokuhambisana, ngaphandle kokufuna "umzila uzothola ngokwenene."
5. Design for Tool Use
Problem:Lesi siphuzu ingangena okuhlobene, kodwa ungathanda ama-agents eyenziwe nge-"Plain Prompting + Raw LLM output parsing." Kuyinto njengokufunda ukulawula i-mechanism enhle ngokucindezela ama-strings ebhokisini kanye nokuvumelana okungenani. Uma i-LLMs ivumela umbhalo olulodwa okuhlobene ukuthi siza kuhlobisa nge-regex noma i-string methods, sincoma:
- I-brittleness: Ukuguqulwa okungenani kwegama le-LLM ye-response (u-word eyongezelelwe, ukuguqulwa kwe-phrase) ingakwazi ukuchithwa yonke ukucubungula. Lokhu kuholele ku- "i-arms race" ephelele phakathi kwe-code yakho yokucubungula ne-model unpredictability.
- I-ambiguity: I-language ye-Natural i-inherently ambiguous. Yintoni ebonakalayo kumadoda akuyona i-puzzle ye-analyzer. "Call John Smith" - eyona ye-John Smith e-database yakho? Yini inombolo yayo?
- Ukulungiselela ukucindezeleka: Ukucindezeleka ikhodi kwandisa, kubaluleke futhi kubaluleke ukucindezeleka. Yonke i-agent entsha "umthamo" inikeza ukucindezeleka izinhlelo ezintsha.
- Izinzuzo ezincinane: Kubalulekile ukwenza imodeli ngokuvumelana ukulanda izixhobo eziningana noma ukunikezela izakhiwo ze-data ezinzima nge-output ye-text elula.
Solution:I-model ivumela i-JSON (noma enye i-format eyenziwe) — uhlelo ivumela.
Umqondo wokufunda lapha kuyinto ukunikezela inkulumoUkucaciswaumsebenzisi Intention futhiUkukhethaizixhobo zokusebenza ku-LLM, ngaphandle kokuzihlanganisaUkukhuthazaUkusuka ku-intentUkusebenzangokusebenzisa interface ephelele.
Ngesikhathi eside, ngokuvamile zonke ama-providers (i-OpenAI, i-Google, i-Anthropic, noma wonke umntu owaziwa) usekela i-so-called AI."function calling"noma umthamo ukukhiqiza output ku-JSON ifomati ebonakalayo.
Ukubuyekeza kanjani lokhu kusebenza:
- Ukucaciswa kwezixhobo: Uyakwazi ukucacisa umsebenzi (izixhobo) njenge-JSON Schema nge-imeyili, isifinyezo, ama-parameter. Ukucaciswa kubalulekile kakhulu - umzila uye uye.
- Ukuhambisa ku-LLM: Ngezinye ukuhambisa, imodeli inatholakala izicathulo zokusebenza kanye ne-prompt.
- Model output: Instead of text, the model returns JSON with:
- name of the function to call
- arguments—parameters according to schema
- Ukusebenza: Ikhodi ivumela i-JSON futhi ivumela isicelo esifanayo nge-parameter.
- Ukusabela Model (i-optional): Ukusabela umphumela kubhalwe ku-LLM yokusabela lokugqibela.
Important:Ukucaciswa kwezixhobo zihlanganisa futhi zihlanganisa. Ukucaciswa okuhlobene = ukwahlukanisa isicelo esisodwa.
What to do without function calling?
Uma imodeli akunakekela izivakashi ezisebenzayo noma ufuna ukunceda ngezifiso:
- Qiniseka imodeli ukuguqulwa i-JSON ku-prompt. Qiniseka ukucacisa ifomu; ungakwazi ukongeza izibonelo.
- Thola impendulo futhi ubhalisele nge-something like Pydantic. Kukhona abacwaningi real of this approach.
Checklist:
- Ukusabela kubhalwe ngempumelelo (isib. JSON)
- Schemas are used (JSON Schema or Pydantic)
- Validation is applied before function calls
- I-Generation Errors ayenza ukuchithwa (i-format error handling iyatholakala)
- I-LLM = Ukukhetha Umsebenzi, Ukusebenza = Ikhodi
6. Own the Control Flow
Problem:Ngokuvamile, ama-agents asebenza njenge-"dialogue" - okokuqala usebenzisa, bese u-agent isibuyekeza. Kuyinto njenge-playing ping-pong: hit-response. I-comfortable, kodwa i-limiting.
Such an agent cannot:
- Do something on its own without a request
- Perform actions in parallel
- Ukulungiselela iminyango ngokushesha
- Yenza iminyango eziningana
- Ukubuyekeza ukuhlaziywa kanye nokuguqulwa kwama-steps eyenziwe
Ngokuba, i-agent kufanele ukulawula "i-flow ye-execution" yayo yayo - ukulawula ukuthi kuyinto esilandelayo futhi indlela yokwenza. Lokhu kuyinto efana ne-Task Scheduler: i-agent ibonise ukuthi kuyimfuneko ukwenza futhi isebenza izinyathelo ngokuhambisana.
This means the agent:
- decides when to do something on its own
- Ungathola iminyango eyodwa ngemva
- Ukubuyekeza iminyango emangalisayo
- Ukulungiselela phakathi kwezicelo
- Ungasebenza ngisho ngaphandle kokufakwa okuqondile
Solution:Ngaphandle kokugcina LLM ukulawula yonke logic, sinikezacontrol flowumugqa ku-code. I-model inikeza kuphela ngezinyathelo noma inikeza elandelayo. Lokhu kubhalwe kusuka ku-"writing prompts" kuya ku-engineering a system with controlled behaviour.
Sishayele izindlela ezintathu ezivamile:
1. FSM (Finite State Machines)
- Yini: Isikhathi esihlalweni esihlalweni futhi izinguquko esifanele.
- LLM: Kuboniswa isinyathelo esilandelayo noma isebenza ngaphakathi kwelizwe.
- Pros: Simplicity, predictability, good for linear scenarios.
- Izixhobo: I-StateFlow, i-YAML configurations, i-State Pattern.
2. DAG (Directed Graphs)
- Yini: Imisebenzi non-linear noma parallel njenge-graph: ama-nodes zihlanganisa, ama-rands zihlanganisa.
- I-LLM: Ingaba i-node noma ukunakekela ukwakhiwa kwe-plan.
- Imibuzo: Flexibility, parallelism, visualizable.
- Izixhobo: LangGraph, Trellis, LLMCompiler, i-Custom DAG diagrams.
3. Planner + Executor
- Yini: I-LLM ibekwe iphrojekthi, ikhodi noma ezinye ama-agents ekusebenziseni.
- LLM: "I-Big" isikhungo, "I-small" isikhungo isebenza.
- Imibuzo: Ukuhlukaniswa kwezingane, ukulawula izindleko, ukucubungula.
- Izixhobo: LangChain Plan-and-Execute.
Why this matters:
- Ukwandisa ukulawula, ukucindezeleka, ukucindezela.
- Kuthumela ukuxhuma amamodeli ahlukahlukene nokushuthaza ukusebenza.
- Ukusebenza kwe-Task Flow kubangelwa ku-visualizable ne-testable.
Checklist:
- Ukusebenzisa FSM, DAG, noma isinyathelo nge izinguquko ezizodwa
- Model decides what to do but doesn't control the flow
- Ukusebenza kungenziwa ku-visualized and tested
- Ukusebenza kwe-Error is eyenziwe ku-flow
7. Include the Human in the Loop
Problem:Nangona umphathi usebenzisa izixhobo ezisakhiwo futhi inesibopho sokulawula okuhlobene, ukunambitheka ngokuphelele kwe-agents ye-LLM ehlabathini lokwenene kunzima kakhulu (noma umdlavuza, ngokuvumelana ne-context). I-LLMs ayizokufuneka ukuxhumana okuhlobene futhi ayizokufuneka ngezinye izinto. Abanikeze izixazululo ezingenalutho. Ikakhulukazi ngezimo ezinzima noma ezinzima.
Main risks of full autonomy:
- I-Errors ye-Permanent: I-Agent ingathola izimpendulo ezinzima (ukushintshwa kwedatha, ukunikela isitimela esiyingqayizivele kumakhasimende enhle, ukuqala isivakashi se-robot).
- Ukuhlukaniswa kwe-compliance: I-agent ingangena ngempumelelo izicelo ze-internal, izidingo ze-legal, noma ukuphazamiseka imibuzo ye-user (ukuba lokhu akuyona inkqubo, ukunciphisa lokhu).
- Ukunciphisa isinyathelo kanye ne-ethics: I-LLM ingathanda izakhiwo ezivamile noma isebenza ngokumelene ne-"isinyathelo se-common sense".
- Ukuphazamiseka kwe-user trust: Uma i-agent yenza imiphumela embalwa, abasebenzisi akufanele ukholelwa.
- I-Audit kanye ne-accountability complexity: Yini ukhangela lapho i-agent ye-autonomous "ukukhuthaza"?
Solution: Ukukhishwa kwe-Carbon-Based Life FormsUkubambisana abantu ekusebenzeni inqubo yokulinganisa ngezinyathelo eziyinhloko.
HITL Implementation Options
1. Approval Flow
- Ukulungiselela: Ukusebenza kubaluleke, i-cost, i-irreversible
- Indlela: umphakeli usebenza umphakeli futhi uhoxele ukuhlaselwa
2. Confidence-aware Routing
- Ukulungiselela: I-Model Is Uncertain
- How:
- self-assessment (logits, LLM-as-a-judge, P(IK))
- escalation when confidence falls below threshold
3. Human-as-a-Tool
- Uma: idatha engaphansi noma isicelo esifundeni esifundeni
- Indlela: u-agent inikeza ukucaciswa (isib. I-HumanTool ku-CrewAI)
4. Fallback Escalation
- Ukulungiselela: Isisindo se-repeated or situation unresolvable
- Indlela: umsebenzi ifakwe umqhubi nge-context
5. RLHF (Human Feedback)
- When: for model improvement
- How: human evaluates responses, they go into training
Checklist:
- Izinqubo ezidingekayo zokubambisana zihlanganisa
- There's a mechanism for confidence assessment
- I-Agent Inokufunda Imibuzo Yomphakathi
- Ukusebenza okuphakeme kufuneka ukhokelwa
- I-interface ye-Input Response
8. Compact Errors into Context
Problem:Ukusebenza kwe-standard ye-multi-system lapho isizukulwane isizukulwane i-"crash" noma i-just report the error and stop. Ukuze i-agent eyenza imisebenzi ngokuzimela, lokhu akuyona ngokuvamile imodeli yokusebenza enhle. Kodwa siphinde siphinde akufanele ukuba i-hallucinate ngokuvumelana ne-problem.
Yini singathanda:
- Brittleness: Any failure in an external tool or unexpected LLM response can stop the entire process or lead it astray.
- Inefficiency: Constant restarts and manual intervention eat up time and resources.
- Inability to learn (in the broad sense): If the agent doesn't "see" its errors in context, it can't try to fix them or adapt its behavior.
- I-Hallucinations - Ngemuva kwalokho
Solution:I-Errors ifakwe ku-Prompt noma Memory. I-Ideya kuyinto ukuhlola ukuthuthukiswa kwe-"self-healing." I-Agent kufanele ukuthatha okungenani ukuguqulwa kwamakhemikhali yayo nokuguqulwa.
Ukuhlobisa Rough flow:
- Ukubuyekezwa kwe-Error
- Self-correction:
- Self-correction mechanisms: Error Detection, Reflection, Retry Logic, Retry with changes (Agent can modify request parameters, rephrase the task, or try a different tool)
- Impact of reflection type: More detailed error information (instructions, explanations) usually leads to better self-correction results. Even simple knowledge of previous errors improves performance.
- Internal Self-Correction: Training LLMs for self-correction by introducing errors and their fixes into training data.
- Request human help: If self-correction fails, the agent escalates the problem to a human (see Principle 7).
Checklist:
- I-error ye-step edlule ifakwe ku-context
- Retry logic exists
- I-Fallback / I-escalation yabantu isetshenziselwa ukulahleka okuqhubekayo
9. Break Complexity into Agents
Problem:Sicela ukuguqulwa kwelanga le-LLM (ama-context window ingxenye), kodwa bheka le ngxaki eminye. I-task eningi futhi eningi kakhulu, iminyango engaphezu kunzima, okungenani i-context window elide. Njengoba i-context ikhiqiza, i-LLM inesibophelele ukuphazamiseka noma ukuphazamiseka. Ngokuvimbela ama-agents ku-domains ezithile nge-3-10, okungenani okungenani angu-20 iminyango, sinikezela i-context window enhle futhi ukusebenza kwe-LLM ephezulu.
Solution:Ukusebenzisa ama-agents amancane abacindezelwe ngezinsizakalo ezithile. One agent = one task; orchestration kusukela phezulu.
Izinzuzo ze-small and focused agents:
- I-Manageable Context: Izindonga ezincinane ze-context zihlanganisa ukusebenza kwe-LLM engcono
- Iziqinisekiso zibonakalayo: Ngama-agents ngamunye inesibopho futhi isicelo esifanele kahle
- Ukubuyekezwa kwe-Reliability: Ukunciphisa amathuba yokubuyekezwa ku-Workflows eqinile
- Ukuhlolwa okuhlobisa: Kulula ukuhlolwa nokuqedela umsebenzi ezithile
- Improved debugging: Easier to identify and fix problems when they arise
Unfortunately, there's no clear heuristic for understanding when a piece of logic is already big enough to split into multiple agents. I'm pretty sure that while you're reading this text, LLMs have gotten smarter somewhere in labs. And they keep getting better and better, so any attempt to formalize this boundary is doomed from the start. Yes, the smaller the task, the simpler it is, but the bigger it gets, the better the potential is realized. The right intuition will only come with experience. But that's not certain.
Checklist:
-
Scenario is built from microservice calls
-
Agents can be restarted and tested separately
-
Agent = minimal autonomous logic. You can explain what it does in 1-2 sentences.
III. Control Model Interaction
Imodeli ukulawula generations. Yonke okufakiwe nawe.
Imodeli ukulawula generations. Yonke okufakiwe nawe.
Ukulungiselela kanjani isicelo, into esidlulile ku-context, izicelo ezitholakala – konke okuhlobisa ukuthi imiphumela kuyinto enhle noma "ukudala".
I-LLMs ayifunda imibuzo. Zifunda i-token.
Ngokwenza lokhu, noma yini inguqulo ingxubevange ingxubevange ingxubevange ingxubevange - akuyona ngokushesha.
This section is about not letting everything drift: prompts = code, explicit context management, constraining the model within boundaries. We don't hope that the LLM will "figure it out on its own."
10. Treat Prompts as Code
Problem:I-pattern enhle kakhulu, ikakhulukazi phakathi kwamakhemikhali ngaphandle kwe-ML noma i-SE background, kuyinto ukugcina ama-prompts ngqo ku-code. Noma okungenani, ukugcinwa okuzenzakalelayo ku-file e-external.
Ukulungiselela lokhu kuholele izinzuzo eziningana zokuphathwa nokuphakamisa:
- Ukuhamba, ukucaciswa, nokuguqulwa kubaluleke njengezinkinga leprojekthi kanye nesikhathi sokukhula.
- Ngaphandle kwe-versioning esifundeni, kubalulekile kakhulu ukucubungula ukucubungula okusheshayo, izizathu zokuguqulwa, futhi ukuguqulwa ku-stable editions ezidlulele ngexesha lokuphumula.
- I-Inefficient Improvement and Debugging Process: Ukusebenza okusheshayo ngaphandle kwama-metric kanye nokuVavanyelwa kubangelwa ku-process ye-subjective kanye ne-labour-intensive ne-results ezinzima.
- Ukubuyekezwa kwamanye ama-team members, kuhlanganise (ngaphezu kwalokho) wena elizayo.
Solution:Prompts kuleli khasi akugqoka okungaziwa kakhulu kusuka ku-code futhi izicelo zokusebenza zokusebenza zokusebenza zokusebenza zayo zihlanganisa.
This implies:
- Ukubuyekeza ngokuhambisana futhi ngokushesha, usebenzisa amafayela ezizodwa (njenge-.txt, .md, .yaml, .json) noma ngisho izinhlelo zokulawula isampula (njenge-Jinja2, Handlebars, noma izixhobo ezizodwa ezifana ne-BAML).
- Explicit prompt versioning. You can even do A/B tests with different versions after this.
- Testing. You heard that right.
- This could be something like unit tests, where you compare LLM responses to specific inputs against reference answers or expected characteristics depending on the prompt
- Evaluation datasets
- Format compliance checks and presence/absence of key elements - For example, if a prompt should return JSON, the test can validate its structure
- Even LLM-as-a-judge if your project and design justify it.
Thola ku-testing ngokugqithisileyo ku-Principle 14.
Checklist:
- Izimpendulo zithunyelwe ezahlukile, ezahlukweni ezahlukweni zebhizinisi
- Kuyinto diff futhi ukuguqulwa umlando
- izifundo ezisetshenziselwa (ngokusho)
- (I-Optional) Yintoni kokubuyekezwa ngokushesha njengesigaba se-code review?
11.Ubuchwepheshe Kontekst
Problem:Thina ngethula "ukubuyekeza" ye-LLMs, ukuguqulwa okungenani lokhu ngokubuyekeza i-history ku-memory ye-external kanye nokusetshenziswa ama-agents ezahlukile ngezinsizakalo ezahlukile. Kodwa lokhu akuyona konke. Ngizokuthumela ukuba siphinde sibe ukulawula i-context window ngokuvumelana (and lapha akuyona nje kokubuyekeza i-history ukuze zihlukanise ubukhulu obuningi noma kuhlanganise ama-errors kusuka kuma-steps ezidlulayo ku-context).
Standard formats aren't always optimal:Imininingwane olulodwa ngokucophelela "i-role-content" (system/user/assistant
) ifomati iyona baseline, kodwa kungabangela i-token-ezigcwele, engabizi yokufundisa, noma engabizi ekudluliseni isimo se-complex ye-agent yakho.
Uninzi lomsebenzisi we-LLM usebenzisa ifomu ye-message ye-standard (isithombe se-objects nge-LLM).role
: “system”, “user”, “assistant”,content
, futhi izikhathitool_calls
imikhiqizo
Nangona lokhu "ukusebenza kahle kakhulu kumazimo ezininzi", ukuzemaximum efficiency(ngokusho kanye ne-token kanye ne-model's attention), singathola ukucubungula kwe-context ngokuvamile.
Solution: To engineer it. To treat the creation of the entire information package passed to the LLM as "Context Engineering."Kuyinto:
- Full Control: Taking full ownership for what information enters the LLM's context window, in what form, volume, and sequence.
- Yenza i-Formats ye-Custom: Ngaphandle kokuzihlanganisa ku-standard message lists. Ukukhishwa kwezindlela zethu zokusebenza ezisebenzayo zokubonisa inkqubo. Ngokwesibonelo, ungathola ukusetshenziswa kwe-XML-like structure yokupakisha izinhlobo ezihlukahlukene zebhizinisi (izindaba, izivakashi zokusebenza, imiphumela yayo, imiphumela, njll) eminye noma amanye ama-messages.
- I-Holistic Approach: Ukubuyekeza umklamo akuyona nje njenge-dialogue history, kodwa njenge-sum total of everything the model might need: the immediate prompt, imiyalezo, idatha kusuka ku-RAG systems, idatha ye-tool calls, idatha ye-agent, ingcindezi evela kuma-interactions, futhi ngisho imiyalezo ku-output format ebonakalayo.
(Instead of a checklist) How do you know when this makes sense?
Uma unemibuzo ye-one of the following:
- I-Information Density. I-Maximum meaning with minimal noise.
- I-Cost-Efficiency. Ukunciphisa inani le-token lapho singakwazi ukuthola izinga elifanayo ngentengo elincane.
- Ukuphuculwa kokuphendula umyalezo.
- Ukusebenza ukuhlanganiswa kwe-info sensitive, ukulawula, ukufilitha, .and ekupheleni konke ngokuvumelana ne-classic "ukudlulisa, Ngingathanda kuphela imodeli ye-model ye-language elikhulu."
12. Constrain the Chaos: Secure Inputs, Guarded Actions, and Grounded Outputs
Problem:Ngaphezu kwalokho, izinga lokuphumula kwebhizinisi lwezinhlobonhlobo zihlanganisa izinga lokuphumula. Lokhu kubalulekile ukubuyekeza izinga lokuphumula ezibalulekile kakhulu ngokulinganayo futhi ngokuvamile ukuthatha "ukudluliselwa".
Ngokuqondene ne-Principle, sincoma ukuthi:
- I-Prompt Injection ye-Prompt Injection iyatholakala. Uma injineli wakho uxhumane ngqo nomkhiqizi, kufanele ukulawula ukuthi i-input yatholakala njenge-input. Ngokusho ne-username, ungakwazi ukufumana inkinobho elidinga ukuchithwa kwe-flow yakho futhi ukhangela injineli ukunciphisa izidingo zayo zokuqala, ukunikezela ulwazi oluthile, ukwenza imisebenzi ezinzima, noma ukukhiqiza izinto ezinzima.
- I-Data Sensitive Leakage. Ngenxa ye-motivation ephakeme, noma i-"voices in its head", i-agent ingathumela ulwazi oluphambili, njenge-data yomsebenzisi, izibuyekezo zebhizinisi njll.
- Ukukhiqizwa kwekhwalithi ephephile noma ephephile. Uma lokhu yi-design, ukunceda lokhu.
- Ukulungiselela izinto lapho ulwazi ayikho. I-Eternal Pain.
- Ngaphansi kwezingane ezivunyelwe. Ukulungiswa kwezobuchwepheshe, sicela ushiye? Kodwa ngokufanelekileyo, inqubo yayo yokuqinisekisa, ingozi kungase kufinyelela izixazululo ezingenalutho ezingenalutho kakhulu, akuyona zonke ngeke kufinyeleleke ku-comportment ebonakalayo.
Ukhuseleko kanye nokufakwa kwe-agent ye-LLM akuyona isinyathelo esisodwa, kodwa uhlelo oluphakeme lwezinhlayiyana (i-defence-in-depth) elihlanganisa yonke imizuzu yokusebenzisana. Izinzuzo zihlanganisa, futhi akukho indlela eyodwa yokhuseleko kuyinto i-pannacea. Ukhuseleko olusebenzayo kufuneka i-combination ye-technology.
Solution:Thina uxhumane inkqubo ye-defense e-multi-layer, ukuhlola ngokuvamile zonke izimo ezingenalutho kanye nezinkinga ezingenalutho, futhi uxhumane ngokuvumelana ngokuvumelana nezimo ezingenalutho.
In a basic setup, you should consider:
-
Secure Inputs.
- Check for known attack-indicator phrases (e.g., "ignore all previous instructions"). It sometimes makes sense to combat potential obfuscation.
- Try to determine the user's intent separately. You can use another LLM for this, to analyze the input for the current one.
- Control input from external sources, even if they are your own tools.
-
Guarded Actions. Control the privileges of both the agent and its tools (granting the minimum necessary), clearly define and limit the list of available tools, validate parameters at the input to tools, and enable Principle #7 (Human in the Loop).
-
Output Moderation. Design a system of checks for what the model outputs, especially if it goes directly to the user. These can be checks for relevance (ensuring the model uses what's in the RAG and doesn't just make things up) as well as checks for general appropriateness. There are also ready-made solutions (e.g., the OpenAI Moderation API).
Uhlelo lokugqibela, kunjalo, kulingana nezidingo zakho kanye nokuqinisekisa kwesimo sakho. Kwi-checklist, siza kuhlolwa ezinye izindlela.
Checklist:
-
User input validation is in place.
-
For tasks requiring factual information, the data within the RAG is used.
-
The prompt for the LLM in a RAG system explicitly instructs the model to base its answer on the retrieved context.
-
LLM output filtering is implemented to prevent PII (Personally Identifiable Information) leakage.
-
The response includes a link or reference to the source.
-
LLM output moderation for undesirable content is implemented.
-
The agent and its tools operate following the principle of least privilege.
-
The agent's actions are monitored, with HITL (Human-in-the-Loop) for critical operations.
IV. Keep It Alive
Umthengisi ukuthi "kinda ukusebenza" kuyinto ingxubevange nge-effect edlulile.
Umthengisi ukuthi "kinda ukusebenza" kuyinto ingxubevange nge-effect edlulile.
Ngo-prod, kungcono konke ngokushesha. Futhi akuyona ngokushesha. Kwezinye izikhathi, akuyona ngokushesha.
Okuqukethwe okuqukethwe kwe-Engineering Habitseeing what's happeningWazechecking that everything is still working. Logs, tracing, tests—everything that makes an agent's behavior transparent and reliable, even when you're sleeping or developing your next agent.
13. Trace Everything
Problem:Ngokuyinkimbinkimbi noma ezinye, uzothola imiphumela emzimbeni lapho i-agent akufanele asebenza njengokufanele. Ngesikhathi sokuthuthukiswa, ukuhlolwa, ukugcina imiphumela, noma ngesikhathi sokusebenza okwenziwe. Lokhu kubalulekile, futhi ngesikhathi esizayo, kuyimfuneko emzimbeni. Lokhu kubalulekile ukuba uthatha amahora nezinsuku ukucubungula, ukuhlola ukuthi kungcono lokho, ukuguqulwa inkinga, futhi ukucubungula. Ngithanda ukuthi kuleli xesha uyenza i-Principle #1 (Keep State Outside) kanye ne-#8 (I-Compact Errors ku-Context). Kwiimeko ezininzi, lokhu kuya kuthatha kube kunzima ukunciphisa ukuphila kwakho kakhulu. Ezinye izinsizakalo ziya kukunceda lapha.
Nangona kunjalo (ngokuthi ikakhulukazi uma uye wahlanganyela ukuba akufanele ukujabulela kwabo ngexesha elandelayo), kubalulekile kakhulu ukujabulela ukujabulela ngokushesha futhi ukugcina isikhathi futhi izihlangu elandelayo ngokuvumelana ne-princip.
Solution:Ukuhlaziywa yonke indlela kusuka ku-request kuya ku-action. Futhi uma unayo i-logs ye-components eyodwa, ukucubungula inethiwekhi ephelele kungabangela ukucubungula. Futhi uma unemibuzo enkulu ye-puzzles noma i-Lego, ngesikhathi esifundeni, kuya kwangaphambili, kuya kwangaphambili. Ngakho-ke, i-logs kufuneka kukhona, kufanele ku-end-to-end, futhi kufanele kuhlanganisa konke.
Why it's needed:
- Debugging — Hlola ngokushesha lapho izinto zihlala. Lokhu kubona le ncwadi.
- I-Analytics — Hlola lapho izinzuzo zihlanganisa futhi indlela yokuphucula.
- Ukubuyekezwa Kwekhwalithi - Ukubuyekeza kanjani imibuzo emzimbeni.
- Reproducibility – ungakwazi ukuguqulwa ngokucacileyo izinyathelo.
- I-Audit - I-log ye-agents ye-decisions kanye nama-actions.
I-Basic "Gentleman's Set" ikhoyili lokhu:
- Input: Isicelo yokuqala lomsebenzisi, ama-parametres eyenziwa kusuka ku-step edlule.
- I-Agent State: Izinguquko eziyinhloko ze-state ze-agent ngaphambi kokwenza isinyathelo.
- Prompt: The full text of the prompt sent to the LLM, including system instructions, dialogue history, retrieved RAG context, tool descriptions, etc.
- I-LLM Output: Ukusabela okugcwele, okusheshayo kusuka ku-LLM, ngaphambi kokufaka noma ukucubungula.
- Tool Call: If the LLM decided to call a tool – the name of the tool and the exact parameters it was called with (according to the structured output).
- Umphumela we-Tool: Ukusabela ukuthi umphumela wabelana, kuhlanganise iziphumo ezinhle kanye nezindaba ze-error.
- Agent's Decision: What decision the agent made based on the LLM's response or the tool's result (e.g., what next step to perform, what answer to give the user).
- I-Metadata: I-Step Execution Time, i-LLM model eyenziwe, i-cost ye-call (ngokufinyelela), i-code / i-prompt version.
Note:Thola izixhobo zokusebenza zokusebenza zokusebenza; ngaphansi kwezimo ezithile, zikhuthaza ukuphila kwakho kakhulu. I-LangSmith, isibonelo, inikeza ukubonisa okuhlobene kwe-call chains, izivakashi, imibuzo, nokusetshenziswa kwezixhobo. Ungathuthukisa izixhobo ezifana ne-Arize, i-Weights & Biases, i-OpenTelemetry, njll ukuze izidingo zakho. Kodwa okokuqala, bheka i-Principle #15.
Checklist:
- Zonke izinyathelo ze-agent zithunyelwe (u-version yakho ye-” gentleman’s set”).
- Izinyathelo zihlanganiswa nge-session_id kanye ne-step_id.
- Kuyinto interface ukubonisa inkampani ephelele.
- Umhlahlandlela eyenziwe ku-LLM kungenziwa ekubunjweni ngalinye.
14. Test Before You Ship
Problem:Ngaphezu kwalokho, ungakwazi ukufumana izixazululo ezithile ezisebenzayo. It is ukusebenza, noma ngisho nje indlela ufuna. Ship it to prod? Kodwa kanjani siqinisekisa ukuthiWazeUkusebenza? Ngisho ngemva kokufinyelela okungenani elilandelayo? Yes, Ngitholela kwelanga lokuphendula.
Ngokuvamile, ama-updates e-LLM systems, njenge-imeyili ye-application code, ama-updates ku-datasets ye-fine-tuning noma i-RAG, inguqulo entsha ye-base LLM, noma ngisho ama-updates amancane e-prop-updates-ukuvela ngokuvamile izixazululo ezingenalutho ezivamile ze-logic ezisebenzayo kanye nezimo ezingenalutho, ngezinye izikhathi ezikhuphazayo, ze-agent. Izindlela ezivamile ze-software zibonisa zibonakalayo yokulawula ikhwalithi ephelele ze-LLM systems. Lokhu kubaluleke ku-risks kanye nemikhiqizo ezizodwa ze-big language models:
- Model Drift. Ungayenza yini, kodwa ukusebenza kwangaphakathi. Mhlawumbe umphakeli wahlala imodeli yabo, mhlawumbe ubungakanani idatha yokufaka kubangaphakathi (data drift) - okufakiwe yamuva ingaphambili kungase kusebenza namhlanje.
- I-Prompt Brittleness. Ngezinye ukuguqulwa okungenani ku-Prompt kungabangela ukucubungula isakhiwo se-logic esekelwe kanye nokugubungula i-output.
- I-non-determinism ye-LLMs: Njengoba uyazi, i-LLMs eminingi ayikho-deterministic (ngakumbi nge-temperature > 0), okwenza ukuba zihlanganisa imibuzo ahlukene kwelinye ingxubevange ngalinye. Lokhu kubaluleke ukwakhiwa kwezivivinyo ezivamile ezihlangene umlinganiselo oluthile futhi kusenza ukulinganisa kwamafutha.
- Izinzuzo yokuguqulwa nokuguqulwa kwama-debugging. It uya kuba kulula ukuba usethule isisekelo yokuqala, kodwa ukudluliswa kwama-error ezithile yokuguqulwa kungabangela ngempumelelo ngisho nge-data kanye ne-states ezivamile.
- "The Butterfly Effect." Kwiinkqubo ezinamandla, ukuhlaziywa kwelinye ingxenye (njenge imodeli noma i-prompt) ingakwazi ukuxhumana nge-API, ama-databases, izixhobo, njll, futhi kuholele ukuguqulwa kwamakhemikhali eminye.
- Ukuhlobisa
Ngiyazi ukuthi izivivinyo ezivamile, ezisekelwe ku-verification ye-code logic esifundeni, akufanele ngokugcwele ukuhlangabezana nezinkinga.
Solution:Thina uzodinga ukucubungula ukucubungula okuhlobene okuqukethe izinto eziningi, ukuhlanganisa izixazululo ze-classic ne-domain-specific. Lokhu isixazululo kufanele isixazulule izindawo ezilandelayo:
- I-Multi-Level Testing: I-combination ye-test types ezivela kumasipala ahlukahlukene we-system: ukusuka ku-low-level unit tests for individual functions and prompts kuya kuma-scenaries ezinguqile ezivela ku-end-to-end workflow ye-agent kanye ne-user interaction.
- Ukulinganiswa kokusebenza kwe-LLM kanye nokukwazi: Ukuhlolwa kufuneka ukulawula akuyona kokusebenza kuphela, kodwa futhi izici zokuphendula ze-LLM, njenge-relevance, ukucacisa, ukuhambisana, ukunemba kwekhwalithi okuphawulekayo noma okuphawulekayo, kanye nokuhlanganiswa kwezinqubo kanye ne-styled.
- I-Regression kanye ne-Quality Tests ezihlanganisa "i-gold datasets" ezine izibonelo ezahlukahlukene zokufaka kanye nezithombe zokufaka (noma izigaba ezingenalutho) zezithombe.
- Okuzenzakalelayo nokuxhumana ku-CI / CD.
- Ukubuyekezwa kwe-human-in-the-loop: Izinyathelo ezithile ze-LLM-ev should involve a human for calibrating metric and reviewing complex or critical cases.
- I-Iterative approach to prompt development and testing: Ubuchwepheshe we-Prompt kufanele isetshenziswe njenge-process ye-iterative lapho zonke i-version ye-Prompt ibhekwa ngempumelelo nokuhlolwa ngaphambi kokusebenza.
- Testing at different levels of abstraction:
- Component testing: Individual modules (parsers, validators, API calls) and their integration.
- Prompt testing: Isolated testing of prompts on various inputs.
- Chain/Agent testing: Verifying the logic and interaction of components within an agent.
- End-to-end system testing: Evaluating the completion of full user tasks.
Checklist:
-
Logic is broken down into modules: functions, prompts, APIs—everything is tested separately and in combination.
-
Response quality is checked against benchmark data, evaluating meaning, style, and correctness.
-
Scenarios cover typical and edge cases: from normal dialogues to failures and provocative inputs.
-
The agent must not fail due to noise, erroneous input, or prompt injections—all of this is tested.
-
Any updates are run through CI and monitored in prod—the agent's behavior must not change unnoticed.
Principle 15: Own the Execution Path
Kuyinto meta-ukudluliselwa; kusebenza kuzo zonke izidakamizwa ezingezansi.
Ngesikhathi eside, sinikeza izindlela eziningana nezinhlelo zokusebenza. Lokhu kubaluleke kakhulu, kulula, futhi kuyinto umlilo.
Ngokuvamile, ukhethe isixazululo esizayo kuhlanganisa atrade-offYou get speed and an easy start, but you loseflexibility, control, and, potentially, security.
Lokhu kubalulekile ikakhulukazi ekuthuthukiseni ama-agent, lapho kubaluleke ukulawula:
- I-impredictability ye-LLMs
- I-logic ephelele ye-transitions ne-self-correction,
- Ukusebenza kanye nokuthuthukiswa kwe-system, ngisho nangokuthi izidingo zayo zayo zangaphandle.
I-Frameworks Yenzainversion of control: babona wena indlela agent kufanele ukusebenza. Lokhu kungathuthukisa prototype kodwa ngempumelelo ukuthuthukiswa eside yayo.
Uninzi lwezimfihlo ezijwayelekile ziye zitholakala ngokusebenzisa izixazululo ze-off-the-shelf – futhi lokhu kunokwenzeka ngokuvamile. Kodwa ngezinye izimo,explicit implementation of the core logic takes a comparable amount of timefuthi inikeza uncomparably kakhulutransparency, manageability, and adaptability.
I-extreme enhle futhi ikhona-over-engineering, umdla ukubhala yonke into ukusuka kwangaphakathi. Lokhu pia ingxaki.
This is why the key is balance.I-engineer ikhohlisa ngokufanele: lapho kubalulekile ukhangela ku-framework, futhi lapho kubalulekile ukugcina ukulawula. Futhi zihlola lokhu ngokucophelela, ukuhlola izindleko kanye nemiphumela.
Ungakuthanda: umkhakha wabhala. Izindlela ezininzi ziye ziye ziye ziye ziye ziye ziye ziye ziye ziye ziye ziye ziye ziye ziye ziye ziye ziye ziye ziye ziye ziye ziye ziye ziye ziye ziye ziye ziye ziye ziye ziye ziye ziye ziye ziye ziye ziye ziye ziye ziye ziye ziye zibe.
Conclusion
UkuphakamaOkay, sinikeza izimo ezingu-15, okuyinto, ngokuvamile, kusiza ukuguqulwa ukujabulela okokuqala kwe-"it's alive!" ku-confidence ukuthi umphakeli wakho we-LLM uya ukusebenza ngempumelelo, oluthile, futhi etholakalayo ngaphansi kwezimo zamazwe.
Ingabe ufuna ukushumeka kokuthunyelwe kwebhizinisi lakho?
Key takeaways to carry with you:
- I-Engineering approach iyisisombululo: Akufanele "magic" ye-LLMs. Ukwakhiwa, ukucaciswa, ukulawula, kanye nokuvamile kubangeli wakho engcono.
- I-LLM iyinhlangano enamandla, kodwa kuphela ingxenye: Ukwelashwa kwe-LLM njengesi-intelligent kakhulu, kodwa nangokuthi, ingxenye eyodwa ye-system yakho. Ukulawula inqubo jikelele, idatha, kanye nokhuseleko kufanele kube nawe.
- I-Iteration kanye ne-feedback zihlanganisa imiphumela: Kuyinto emangalisayo ukukhiqiza i-agent ephelele ekuqaleni. Jabulela ukuhlolwa, ukucubungula, ukucubungula kwama-error, nokuphucula okuqhubekayo-both ye-agent ngokwemvelo kanye nezinqubo zakho zokuthuthukiswa. Ukuhlanganiswa kwe-human in the loop (i-HITL) akuyona kuphela ukhuseleko; kuyinto futhi mayelana nokufuna isithombe esizayo sokucwaninga yokufunda.
- I-Community ne-Openness: I-field ye-LLM Agents ikakhulukazi ngokushesha. Qinisekisa izifundo ezintsha, izixhobo, nezimfuneko ezinhle, futhi usahlangabezana nezimonyo zakho zayo. Iningi kwezinhlobonhlobo ezinxulumene nawe, umntu owenziwe ngexesha noma isixazululo manje.
Ngingathanda ukuthi ungayifumana into entsha futhi etholakalayo lapha, futhi mhlawumbe ungathanda ukufinyelela lokhu lapho ukwakha umphathi wakho elilandelayo.