Advanced RAG patterns on Amazon SageMaker | Amazon Web Services

Today, customers across all industries—whether it's financial services, healthcare and life sciences, travel and hospitality, media and entertainment, telecommunications, software as a service (SaaS), or even proprietary model providers—are using large language models (LLMs) to build applications like question and answering (QnA) chatbots, search engines, and knowledge bases. These generative AI applications are used not only to automate existing business processes, but also have the power to transform the experience for customers using these applications. With the advancements being made with LLMs like Mixtral-8x7B Instruct, built on architectures such as the mixture of experts (MoE), customers are continuously looking for ways to improve the performance and accuracy of their generative AI applications while allowing them to effectively use a broader range of closed and open source models.

A number of techniques are typically used to improve the accuracy and performance of an LLM's output, such as parameter efficient fine-tuning (PEFT), reinforcement learning from human feedback (RLHF), and knowledge distillation. However, when building generative AI applications, you can use an alternative solution that allows for the dynamic incorporation of external knowledge and lets you control the information used for generation without the need to fine-tune your existing foundation model. This is where Retrieval Augmented Generation (RAG) comes in, especially for generative AI applications, as opposed to the more expensive and rigid fine-tuning alternatives we've discussed. If you're implementing complex RAG applications in your daily tasks, you may encounter common challenges in your RAG systems such as inaccurate retrieval, increasing size and complexity of documents, and overflow of context, which can significantly impact the quality and reliability of the generated answers.

This post discusses RAG patterns to improve response accuracy using LangChain and tools such as the parent document retriever, in addition to techniques like contextual compression, to enable developers to improve their existing generative AI applications.

Solution overview

In this post, we demonstrate the use of Mixtral-8x7B Instruct text generation combined with the BGE Large En embedding model to efficiently construct a RAG QnA system on an Amazon SageMaker notebook using the parent document retriever tool and the contextual compression technique. The following diagram illustrates the architecture of this solution.

You can deploy this solution with just a few clicks using Amazon SageMaker JumpStart, a fully managed platform that offers state-of-the-art foundation models for various use cases such as content writing, code generation, question answering, copywriting, summarization, classification, and information retrieval. It provides a collection of pre-trained models that you can deploy quickly and with ease, accelerating the development and deployment of machine learning (ML) applications. One of the key components of SageMaker JumpStart is the Model Hub, which offers a vast catalog of pre-trained models, such as Mixtral-8x7B, for a variety of tasks.

Mixtral-8x7B uses an MoE architecture. This architecture allows different parts of a neural network to specialize in different tasks, effectively dividing the workload among multiple experts. This approach enables efficient training and deployment of larger models compared to traditional architectures.

One of the main advantages of the MoE architecture is its scalability. By distributing the workload across multiple experts, MoE models can be trained on larger datasets and achieve better performance than traditional models of the same size. Additionally, MoE models can be more efficient during inference because only a subset of experts needs to be activated for a given input.
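To make the routing idea concrete, the following toy sketch (our illustration, not Mixtral's actual implementation) shows top-2 expert gating in plain Python: a router scores each expert, only the two highest-scoring experts run, and their outputs are combined with renormalized softmax weights. The expert functions and scores here are hypothetical stand-ins.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_forward(x, experts, router_scores, top_k=2):
    """Route input x to the top_k highest-scoring experts only,
    then combine their outputs weighted by renormalized gate values."""
    ranked = sorted(range(len(experts)), key=lambda i: router_scores[i], reverse=True)
    chosen = ranked[:top_k]
    gates = softmax([router_scores[i] for i in chosen])
    # Only the chosen experts run; the rest stay idle, which is the
    # source of MoE's inference efficiency.
    return sum(g * experts[i](x) for g, i in zip(gates, chosen)), chosen

# Hypothetical toy "experts": each is just a scalar function here.
experts = [lambda x: x + 1, lambda x: 2 * x, lambda x: x * x, lambda x: -x]
output, active = moe_forward(3.0, experts, router_scores=[0.1, 2.0, 1.5, -1.0], top_k=2)
print(active)  # only experts 1 and 2 are activated
```

In a real MoE model such as Mixtral-8x7B, the router and experts are learned neural network layers and the routing happens per token, but the efficiency argument is the same: compute scales with the number of active experts, not the total number of experts.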

For more information about Mixtral-8x7B Instruct on AWS, refer to Mixtral-8x7B is now available in Amazon SageMaker JumpStart. The Mixtral-8x7B model is made available under the permissive Apache 2.0 license, for use without restrictions.

In this post, we discuss how you can use LangChain to build effective and more efficient RAG applications. LangChain is an open source Python library designed for building applications with LLMs. It provides a modular and flexible framework for combining LLMs with other components, such as knowledge bases, retrieval systems, and other AI tools, to build powerful and customizable applications.

We walk through building a RAG pipeline on SageMaker with Mixtral-8x7B. We use the Mixtral-8x7B Instruct text generation model with the BGE Large En embedding model to build an efficient QnA system using RAG on a SageMaker notebook. We use an ml.t3.medium instance to demonstrate deploying LLMs via SageMaker JumpStart, which can be accessed through a SageMaker-generated API endpoint. This setup allows for the exploration, experimentation, and optimization of advanced RAG techniques with LangChain. We also illustrate the integration of the FAISS embedding store into the RAG workflow, highlighting its role in storing and retrieving embeddings to enhance the system's performance.

We perform a brief walkthrough of the SageMaker notebook. For more detailed, step-by-step instructions, refer to the Advanced RAG Patterns with Mixtral on SageMaker Jumpstart GitHub repo.

The need for advanced RAG patterns

Advanced RAG patterns are essential to improve upon the current capabilities of LLMs in processing, understanding, and generating human-like text. As the size and complexity of documents grow, representing multiple facets of a document in a single embedding can lead to a loss of specificity. Although it's essential to capture the general essence of a document, it's equally crucial to recognize and represent the varied sub-contexts within it. This is a challenge you often face when working with larger documents. Another challenge with RAG is that, at retrieval time, you aren't aware of the specific queries your document storage system will face upon ingestion. This can lead to the information most relevant to a query being buried under surrounding text (context overflow). To mitigate these failure modes and improve on the existing RAG architecture, you can use advanced RAG patterns (the parent document retriever and contextual compression) to reduce retrieval errors, enhance answer quality, and enable complex question handling.

With the techniques discussed in this post, you can address key challenges associated with external knowledge retrieval and integration, enabling your application to deliver more precise and contextually aware responses.

In the following sections, we explore how parent document retrievers and contextual compression can help you deal with some of the problems we've discussed.

Parent document retriever

In the previous section, we highlighted the challenges RAG applications encounter when dealing with extensive documents. To address these challenges, parent document retrievers categorize and designate incoming documents as parent documents. These documents are recognized for their comprehensive nature, but aren't directly used in their original form for embeddings. Rather than compressing an entire document into a single embedding, parent document retrievers dissect these parent documents into child documents. Each child document captures distinct aspects or topics from the broader parent document. Following the identification of these child segments, individual embeddings are assigned to each, capturing their specific thematic essence (see the following diagram). During retrieval, the parent document is returned to the LLM. This technique provides targeted yet broad-ranging search capabilities, furnishing the LLM with a wider perspective. Parent document retrievers offer LLMs a twofold advantage: the specificity of child document embeddings for precise and relevant information retrieval, coupled with the invocation of parent documents for response generation, which enriches the LLM's outputs with layered and thorough context.
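The mechanics can be illustrated with a minimal, dependency-free sketch. This is a toy illustration with hypothetical helper functions, using word overlap in place of real embeddings: small child chunks are matched against the query, but the full parent document is returned for generation.

```python
# A toy parent document retriever: match on small child chunks,
# return the larger parent (word overlap stands in for embeddings).

def chunk(text, size):
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def overlap(a, b):
    return len(set(a.lower().split()) & set(b.lower().split()))

def build_index(parents, child_size=5):
    index = []  # (child_chunk, parent_id) pairs stand in for a vector store
    for pid, parent in enumerate(parents):
        for child in chunk(parent, child_size):
            index.append((child, pid))
    return index

def retrieve_parent(query, parents, index):
    # Match the query against the small, specific child chunks...
    best_child, pid = max(index, key=lambda item: overlap(query, item[0]))
    # ...but return the full parent document as generation context.
    return parents[pid]

parents = [
    "AWS launched EC2 in 2006 with a single instance size and grew rapidly",
    "Amazon Business delivers selection value and convenience to companies",
]
index = build_index(parents)
print(retrieve_parent("when was EC2 launched", parents, index))
```

Later in this post, the same pattern is implemented with LangChain's ParentDocumentRetriever, where a vector store holds the child embeddings and a document store maps them back to their parents.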

Contextual compression

To address the issue of context overflow discussed earlier, you can use contextual compression to compress and filter the retrieved documents in alignment with the query's context, so only pertinent information is kept and processed. This is achieved through a combination of a base retriever for initial document fetching and a document compressor for refining these documents by paring down their content or excluding them entirely based on relevance, as illustrated in the following diagram. This streamlined approach, facilitated by the contextual compression retriever, greatly enhances RAG application efficiency by providing a method to extract and use only what's essential from a mass of information. It tackles the issue of information overload and irrelevant data processing head-on, leading to improved response quality, more cost-effective LLM operations, and a smoother overall retrieval process. Essentially, it's a filter that tailors the information to the query at hand, making it a much-needed tool for developers aiming to optimize their RAG applications for better performance and user satisfaction.
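The idea can be sketched with stand-in components. In this hypothetical, dependency-free example, a stub base retriever returns whole documents, and a toy compressor keeps only the sentences that share words with the query; a real system uses an LLM for that judgment, as shown later in this post.

```python
# A toy contextual compression retriever: fetch documents, then strip
# everything that isn't relevant to the query before passing it on.

def base_retriever(query, documents):
    return documents  # stub: pretend every document was retrieved

def compress(query, document):
    # Toy compressor: keep only sentences sharing a word with the query.
    terms = set(query.lower().split())
    kept = [s.strip() for s in document.split(".")
            if terms & set(s.lower().split())]
    return ". ".join(kept)

def compression_retriever(query, documents):
    compressed = (compress(query, d) for d in base_retriever(query, documents))
    return [c for c in compressed if c]  # drop documents with nothing relevant

docs = [
    "AWS launched EC2 in 2006. The weather was nice that year.",
    "Amazon sells books. Prime ships fast.",
]
print(compression_retriever("when did AWS launch EC2", docs))
```

Only the relevant sentence survives; the off-topic sentence and the wholly irrelevant document are both dropped, which is exactly the behavior that reduces prompt size and LLM cost.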

Prerequisites

If you're new to SageMaker, refer to the Amazon SageMaker Development Guide.

Before you get started with the solution, create an AWS account. When you create an AWS account, you get a single sign-in identity that has complete access to all of the AWS services and resources in the account. This identity is called the AWS account root user.

Signing in to the AWS Management Console using the email address and password that you used to create the account gives you complete access to all of the AWS resources in your account. We strongly recommend that you not use the root user for everyday tasks, even the administrative ones.

Instead, adhere to the security best practices in AWS Identity and Access Management (IAM), and create an administrative user and group. Then securely lock away the root user credentials and use them to perform only a few account and service management tasks.

The Mixtral-8x7b model requires an ml.g5.48xlarge instance. SageMaker JumpStart provides a simplified way to access and deploy over 100 different open source and third-party foundation models. In order to launch an endpoint to host Mixtral-8x7B from SageMaker JumpStart, you may need to request a service quota increase to access an ml.g5.48xlarge instance for endpoint usage. You can request service quota increases through the console, the AWS Command Line Interface (AWS CLI), or the API to allow access to those additional resources.

Set up a SageMaker notebook instance and install dependencies

To get started, create a SageMaker notebook instance and install the required dependencies. Refer to the GitHub repo to ensure a successful setup. After you set up the notebook instance, you can deploy the model.

You can also run the notebook locally on your preferred integrated development environment (IDE). Make sure that you have the Jupyter notebook lab installed.

Deploy the model

Deploy the Mixtral-8X7B Instruct LLM on SageMaker JumpStart:

# Import the JumpStartModel class from the SageMaker JumpStart library
from sagemaker.jumpstart.model import JumpStartModel

# Specify the model ID for the HuggingFace Mixtral 8x7b Instruct LLM model
model_id = "huggingface-llm-mixtral-8x7b-instruct"
model = JumpStartModel(model_id=model_id)
llm_predictor = model.deploy()

Deploy the BGE Large En embedding model on SageMaker JumpStart:

# Specify the model ID for the HuggingFace BGE Large EN Embedding model
model_id = "huggingface-sentencesimilarity-bge-large-en"
text_embedding_model = JumpStartModel(model_id=model_id)
embedding_predictor = text_embedding_model.deploy()

Set up LangChain

After importing all the necessary libraries and deploying the Mixtral-8x7B model and the BGE Large En embedding model, you can now set up LangChain. For step-by-step instructions, refer to the GitHub repo.

Data preparation

In this post, we use several years of Amazon's Letters to Shareholders as a text corpus to perform QnA on. For more detailed steps to prepare the data, refer to the GitHub repo.

Question answering

Once the data is prepared, you can use the wrapper provided by LangChain, which wraps around the vector store and takes input for the LLM. This wrapper performs the following steps:

  1. Take the input question.
  2. Create a question embedding.
  3. Fetch the relevant documents.
  4. Incorporate the documents and the question into a prompt.
  5. Invoke the model with the prompt and generate the answer in a readable manner.
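The steps above can be sketched with stand-in components. The embed, fetch_relevant, and stub_llm functions below are hypothetical stubs for illustration only, not the LangChain wrapper's actual API or the SageMaker endpoints used in this post.

```python
# The wrapper's five steps, sketched end to end with toy components.

def embed(text):
    # Stand-in embedding: a bag of words (real systems use dense vectors).
    return set(text.lower().split())

def fetch_relevant(query_embedding, documents, k=2):
    # Rank documents by word overlap with the query embedding.
    scored = sorted(documents, key=lambda d: len(query_embedding & embed(d)), reverse=True)
    return scored[:k]

def build_prompt(question, context_docs):
    return "Context:\n" + "\n".join(context_docs) + f"\nQuestion: {question}"

def stub_llm(prompt):
    # Stand-in model call: report how much context it was given.
    return f"(model answer based on {prompt.count(chr(10)) - 1} context document(s))"

def query_wrapper(question, documents):  # step 1: take the input question
    q_emb = embed(question)                   # step 2: embed the question
    docs = fetch_relevant(q_emb, documents)   # step 3: fetch relevant documents
    prompt = build_prompt(question, docs)     # step 4: build the prompt
    return stub_llm(prompt)                   # step 5: invoke the model

docs = ["AWS launched in 2006", "Amazon was founded in 1994", "Prime ships fast"]
print(query_wrapper("When did AWS launch?", docs))
```

In the real pipeline shown next, the vector store (FAISS) replaces fetch_relevant, the BGE Large En model replaces embed, and the Mixtral-8x7B endpoint replaces stub_llm.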

Now that the vector store is in place, you can start asking questions:

prompt_template = """<s>[INST]
{query}
[/INST]"""
PROMPT = PromptTemplate(
    template=prompt_template, input_variables=["query"]
)
query = "How has AWS evolved?"
answer = wrapper_store_faiss.query(question=PROMPT.format(query=query), llm=llm)
print(answer)
AWS, or Amazon Web Services, has evolved significantly since its initial launch in 2006. It started as a feature-poor service, offering only one instance size, in one data center, in one region of the world, with Linux operating system instances only. There was no monitoring, load balancing, auto-scaling, or persistent storage at the time. However, AWS had a successful launch and has since grown into a multi-billion-dollar service.

Over the years, AWS has added numerous features and services, with over 3,300 new ones launched in 2022 alone. They have expanded their offerings to include Windows, monitoring, load balancing, auto-scaling, and persistent storage. AWS has also made significant investments in long-term inventions that have changed what's possible in technology infrastructure.

One example of this is their investment in chip development. AWS has also seen a robust new customer pipeline and active migrations, with many companies opting to move to AWS for the agility, innovation, cost-efficiency, and security benefits it offers. AWS has transformed how customers, from start-ups to multinational companies to public sector organizations, manage their technology infrastructure.

Regular retriever chain

In the preceding scenario, we explored the quick and straightforward way to get a context-aware answer to your question. Now let's look at a more customizable option with the help of RetrievalQA, where you can customize how the fetched documents should be added to the prompt using the chain_type parameter. Also, to control how many relevant documents should be retrieved, you can change the k parameter in the following code to see different outputs. In many scenarios, you might want to know which source documents the LLM used to generate the answer. You can get those documents in the output using return_source_documents, which returns the documents that are added to the context of the LLM prompt. RetrievalQA also allows you to provide a custom prompt template that can be specific to the model.

from langchain.chains import RetrievalQA

prompt_template = """<s>[INST]
Use the following pieces of context to provide a concise answer to the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.

{context}

Question: {question}

[/INST]"""
PROMPT = PromptTemplate(
    template=prompt_template, input_variables=["context", "question"]
)

qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=vectorstore_faiss.as_retriever(
        search_type="similarity", search_kwargs={"k": 3}
    ),
    return_source_documents=True,
    chain_type_kwargs={"prompt": PROMPT}
)

Let's ask a question:

query = "How did AWS evolve?"
result = qa({"query": query})
print(result['result'])
AWS (Amazon Web Services) evolved from an initially unprofitable investment to an $85B annual revenue run rate business with strong profitability, offering a wide range of services and features, and becoming a significant part of Amazon's portfolio. Despite facing skepticism and short-term headwinds, AWS continued to innovate, attract new customers, and migrate active customers, offering benefits such as agility, innovation, cost-efficiency, and security. AWS also expanded its long-term investments, including chip development, to provide new capabilities and change what's possible for its customers.

Parent document retriever chain

Let's look at a more advanced RAG option with the help of ParentDocumentRetriever. When working with document retrieval, you may encounter a trade-off between storing small chunks of a document for accurate embeddings and larger documents to preserve more context. The parent document retriever strikes that balance by splitting and storing small chunks of data.

We use a parent_splitter to divide the original documents into larger chunks called parent documents and a child_splitter to create smaller child documents from the original documents:

# This text splitter is used to create the parent documents
parent_splitter = RecursiveCharacterTextSplitter(chunk_size=2000)

# This text splitter is used to create the child documents
# It should create documents smaller than the parent
child_splitter = RecursiveCharacterTextSplitter(chunk_size=400)

# The vectorstore to use to index the child chunks
vectorstore_faiss = FAISS.from_documents(
    child_splitter.split_documents(documents),
    sagemaker_embeddings,
)

The child documents are then indexed in the vector store using embeddings. This enables efficient retrieval of relevant child documents based on similarity. To retrieve relevant information, the parent document retriever first fetches the child documents from the vector store. It then looks up the parent IDs for those child documents and returns the corresponding larger parent documents.

qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    # retriever is the ParentDocumentRetriever built from the splitters and
    # vector store above (see the GitHub repo for its full construction)
    retriever=retriever,
    return_source_documents=True,
    chain_type_kwargs={"prompt": PROMPT}
)

Let's ask a question:

query = "How did AWS evolve?"
result = qa({"query": query})
print(result['result'])
AWS (Amazon Web Services) started with a feature-poor initial launch of the Elastic Compute Cloud (EC2) service in 2006, providing only one instance size, in one data center, in one region of the world, with Linux operating system instances only, and without many key features like monitoring, load balancing, auto-scaling, or persistent storage. However, AWS's success allowed them to quickly iterate and add the missing capabilities, eventually expanding to offer various flavors, sizes, and optimizations of compute, storage, and networking, as well as developing their own chips (Graviton) to push price and performance further. AWS's iterative innovation process required significant investments in financial and people resources over 20 years, often well in advance of when it would pay out, to meet customer needs and improve long-term customer experiences, loyalty, and returns for shareholders.

Contextual compression chain

Let's look at another advanced RAG option called contextual compression. One challenge with retrieval is that usually we don't know the specific queries your document storage system will face when you ingest data into the system. This means that the information most relevant to a query may be buried in a document with a lot of irrelevant text. Passing that full document through your application can lead to more expensive LLM calls and poorer responses.

The contextual compression retriever addresses the challenge of retrieving relevant information from a document storage system where the pertinent data may be buried within documents containing a lot of text. By compressing and filtering the retrieved documents based on the given query context, only the most relevant information is returned.

To use the contextual compression retriever, you'll need:

  • A base retriever – This is the initial retriever that fetches documents from the storage system based on the query
  • A document compressor – This component takes the initially retrieved documents and shortens them by reducing the contents of individual documents or dropping irrelevant documents altogether, using the query context to determine relevance

Add contextual compression with an LLM chain extractor

First, wrap your base retriever with a ContextualCompressionRetriever. You'll add an LLMChainExtractor, which will iterate over the initially returned documents and extract from each only the content that is relevant to the query.

from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import LLMChainExtractor

text_splitter = RecursiveCharacterTextSplitter(
    # Set a really small chunk size, just to show.
    chunk_size=1000,
    chunk_overlap=100,
)

docs = text_splitter.split_documents(documents)
retriever = FAISS.from_documents(
    docs,
    sagemaker_embeddings,
).as_retriever()

compressor = LLMChainExtractor.from_llm(llm)
compression_retriever = ContextualCompressionRetriever(
    base_compressor=compressor, base_retriever=retriever
)

compressed_docs = compression_retriever.get_relevant_documents(
    "How was Amazon impacted by COVID-19?"
)

Initialize the chain using the ContextualCompressionRetriever with an LLMChainExtractor and pass the prompt in via the chain_type_kwargs argument.

qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=compression_retriever,
    return_source_documents=True,
    chain_type_kwargs={"prompt": PROMPT}
)

Let's ask a question:

query = "How did AWS evolve?"
result = qa({"query": query})
print(result['result'])
AWS evolved by starting as a small project inside Amazon, requiring significant capital investment and facing skepticism from both inside and outside the company. However, AWS had a head start on potential competitors and believed in the value it could bring to customers and Amazon. AWS made a long-term commitment to continue investing, resulting in over 3,300 new features and services launched in 2022. AWS has transformed how customers manage their technology infrastructure and has become an $85B annual revenue run rate business with strong profitability. AWS has also continuously improved its offerings, such as enhancing EC2 with additional features and services after its initial launch.

Filter documents with an LLM chain filter

The LLMChainFilter is a slightly simpler but more robust compressor that uses an LLM chain to decide which of the initially retrieved documents to filter out and which ones to return, without manipulating the document contents:

from langchain.retrievers.document_compressors import LLMChainFilter

_filter = LLMChainFilter.from_llm(llm)
compression_retriever = ContextualCompressionRetriever(
    base_compressor=_filter, base_retriever=retriever
)

compressed_docs = compression_retriever.get_relevant_documents(
    "How was Amazon impacted by COVID-19?"
)
print(compressed_docs)

Initialize the chain using the ContextualCompressionRetriever with an LLMChainFilter and pass the prompt in via the chain_type_kwargs argument.

qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=compression_retriever,
    return_source_documents=True,
    chain_type_kwargs={"prompt": PROMPT}
)

Let's ask a question:

query = "How did AWS evolve?"
result = qa({"query": query})
print(result['result'])
AWS (Amazon Web Services) evolved by initially launching feature-poor but iterating quickly based on customer feedback to add necessary capabilities. This approach allowed AWS to launch EC2 in 2006 with limited features and then continuously add new functionalities, such as additional instance sizes, data centers, regions, operating system options, monitoring tools, load balancing, auto-scaling, and persistent storage. Over time, AWS transformed from a feature-poor service to a multi-billion-dollar business by focusing on customer needs, agility, innovation, cost-efficiency, and security. AWS now has an $85B annual revenue run rate and offers over 3,300 new features and services each year, catering to a wide range of customers from start-ups to multinational companies and public sector organizations.

Compare results

The following comparison shows the results of different queries based on technique.

Technique Umbuzo 1 Umbuzo 2 Ukuqhathanisa
I-AWS yavela kanjani? Kungani i-Amazon iphumelela?
Okukhiphayo Okujwayelekile kwe-Retriever Chain I-AWS (Izinkonzo Zewebhu ze-Amazon) yavela ekutshaleni imali ekuqaleni okungenanzuzo yaya ebhizinisini lesilinganiso semali etholwa minyaka yonke esingu-$85B elinenzuzo eqinile, enikeza inhlobonhlobo yezinsizakalo nezici, futhi yaba yingxenye ebalulekile yephothifoliyo ye-Amazon. Naphezu kokubhekana nokungabaza kanye nezivunguvungu zesikhashana, i-AWS yaqhubeka nokusungula, iheha amakhasimende amasha, futhi ithutha amakhasimende asebenzayo, ihlinzeka ngezinzuzo ezinjengokusheshisa, ukusungula izinto ezintsha, ukonga izindleko, nokuphepha. I-AWS iphinde yandisa utshalomali lwayo lwesikhathi eside, okuhlanganisa nokuthuthukiswa kwe-chip, ukuze inikeze amakhono amasha futhi iguqule lokho okungenzeka kumakhasimende ayo. I-Amazon iphumelele ngenxa yokuqamba okusha okuqhubekayo kanye nokunwetshwa ezindaweni ezintsha ezifana nezinsizakalo zengqalasizinda yobuchwepheshe, izinto zokufunda ezidijithali, abasizi bomuntu abaqhutshwa ngezwi, kanye namamodeli ebhizinisi amasha njengemakethe yezinkampani zangaphandle. Ikhono layo lokukala imisebenzi ngokushesha, njengoba kubonakala ekwandeni okusheshayo kokugcwaliseka kwayo kanye namanethiwekhi okuthutha, nakho kunomthelela empumelelweni yayo. Ukwengeza, ukugxila kwe-Amazon ekwenzeni ngcono kanye nezinzuzo ezisebenza kahle ezinqubweni zayo kubangele ukuthuthukiswa kokukhiqiza kanye nokwehliswa kwezindleko. Isibonelo se-Amazon Business sigqamisa ikhono lenkampani lokusebenzisa amandla ayo e-e-commerce kanye nempahla emikhakheni ehlukene. Ngokusekelwe ezimpendulweni ezivela ochungechungeni olujwayelekile lokubuyisela, siyaqaphela ukuthi nakuba linikeza izimpendulo ezinde, lihlushwa ukuchichima komongo futhi liyehluleka ukusho noma yimiphi imininingwane ebalulekile evela kukhorasi mayelana nokuphendula umbuzo onikeziwe. Uchungechunge oluvamile lokubuyisa alukwazi ukuthwebula ama-nuances ngokujula noma ukuqonda umongo, okungenzeka kushoda izici ezibalulekile zedokhumenti.
Okukhiphayo Kokubuyiswa Kwedokhumenti Yomzali I-AWS (Izinkonzo Zewebhu ze-Amazon) yaqala ngokwethulwa okungalungile kokuqala kwesevisi ye-Elastic Compute Cloud (EC2) ngo-2006, ihlinzeka ngosayizi owodwa kuphela, endaweni eyodwa yedatha, endaweni eyodwa yomhlaba, nezimo zesistimu yokusebenza ye-Linux kuphela. , futhi ngaphandle kwezici eziningi ezibalulekile njengokuqapha, ukulinganisa ukulayisha, ukukala okuzenzakalelayo, noma isitoreji esiqhubekayo. Kodwa-ke, impumelelo ye-AWS yabavumela ukuthi baphindaphinde ngokushesha futhi bengeze amakhono angekho, ekugcineni banwebeka ukuze banikeze ama-flavour ahlukahlukene, osayizi, kanye nokwenza kahle kwekhompuyutha, ukugcinwa, kanye nenethiwekhi, kanye nokwenza awabo ama-chip (Graviton) ukuze aqhubekisele phambili amanani nokusebenza. . Inqubo yokusungula ephindaphindayo ye-AWS idinga ukutshalwa kwezimali okubalulekile kwezezimali kanye nezinsiza zabantu phakathi neminyaka engu-20, ngokuvamile kusenesikhathi ngaphambi kokuthi izokhokha nini, ukuze kuhlangatshezwane nezidingo zamakhasimende nokuthuthukisa ulwazi lwamakhasimende lwesikhathi eside, ukwethembeka, kanye nembuyiselo yabaninimasheya. I-Amazon iphumelele ngenxa yekhono layo lokuqamba izinto ezintsha njalo, ukuzivumelanisa nezimo zemakethe ezishintshayo, nokuhlangabezana nezidingo zamakhasimende ezigabeni ezahlukahlukene zemakethe. Lokhu kubonakala empumelelweni ye-Amazon Business, ekhule ibiza cishe u-$35B ekuthengisweni kwesamba saminyaka yonke ngokuletha okukhethiwe, inani, kanye nokwenza lula kumakhasimende ebhizinisi. Ukutshalwa kwezimali kwe-Amazon kumakhono we-ecommerce kanye nezinto zokusebenza kuphinde kwavumela ukudalwa kwezinsizakalo ezifana ne-Buy with Prime, esiza abathengisi abanamawebhusayithi aqonde ngqo kumthengi aguqule ukuguqulwa ukusuka ekubukeni kuya ekuthengeni. 
Isitholi sedokhumenti yomzali sijula ​​ekucacisweni kwecebo lokukhula le-AWS, okuhlanganisa inqubo ephindaphindwayo yokwengeza izici ezintsha ngokusekelwe empendulweni yekhasimende kanye nohambo olunemininingwane olusuka ekuqalisweni kokuntuleka kwesici ukuya endaweni evelele yemakethe, kuyilapho inikeza impendulo enothile. . Izimpendulo zihlanganisa izici eziningi, kusukela ezenzweni ezintsha zobuchwepheshe namasu emakethe kuya ekusebenzeni kahle kwenhlangano nokugxila kwamakhasimende, okuhlinzeka ngombono ophelele wezinto ezinomthelela empumelelweni kanye nezibonelo. Lokhu kungase kubangelwe emandleni okusesha okuhlosiwe okuhlosiwe wedokhumenti yomzali kodwa abanzi.
LLM Chain Extractor: Contextual Compression Output I-AWS yavela ngokuqala njengephrojekthi encane ngaphakathi kwe-Amazon, edinga ukutshalwa kwezimali okukhulu futhi ibhekene nokungabaza okuvela ngaphakathi nangaphandle kwenkampani. Kodwa-ke, i-AWS yaba nesiqalo esikhulu kwabangase baqhudelane nabo futhi ikholelwa enanini engayiletha kumakhasimende nase-Amazon. I-AWS yenze ukuzibophezela kwesikhathi eside ukuqhubeka nokutshala imali, okuholele ezicini namasevisi amasha angaphezu kuka-3,300 aqalwa ngo-2022. I-AWS iguqule indlela amakhasimende aphatha ngayo ingqalasizinda yabo yobuchwepheshe futhi isibe yibhizinisi lesilinganiso semali engenayo esingu-$85B waminyaka yonke elinenzuzo enamandla. I-AWS iphinde yathuthukisa ngokuqhubekayo ukunikezwa kwayo, njengokuthuthukisa i-EC2 ngezici ezengeziwe namasevisi ngemva kokwethulwa kwayo kokuqala. Ngokusekelwe kumongo onikeziwe, impumelelo ye-Amazon ingabangelwa ukunwetshwa kwayo kwamasu ukusuka endaweni yokuthengisa izincwadi kuya endaweni yemakethe yomhlaba wonke ene-ecosystem yabathengisi beqembu lesithathu, ukutshalwa kwezimali kwangaphambi kwesikhathi ku-AWS, ukuqamba okusha ekwethuleni i-Kindle ne-Alexa, kanye nokukhula okukhulu. emalini engenayo yonyaka kusukela ngo-2019 kuya ku-2022. Lokhu kukhula kuholele ekwandeni kwesikhungo sokugcwaliseka, ukwakhiwa kwenethiwekhi yezokuthutha yamamayela wokugcina, kanye nokwakhiwa kwenethiwekhi entsha yesikhungo sokuhlunga, elungiselelwe ukukhiqiza kanye nokwehliswa kwezindleko. I-LLM chain extractor igcina ibhalansi phakathi kokumboza amaphuzu abalulekile ngokugcwele kanye nokugwema ukujula okungadingekile. Ilungisa ngendlela eguquguqukayo kumongo wombuzo, ngakho okukhiphayo kuhambisana ngokuqondile futhi kuyahlanganisa.
LLM Chain Filter: Contextual Compression Output

AWS (Amazon Web Services) started by launching a feature-poor initial offering but iterated quickly based on customer feedback to add the capabilities that were needed. This approach allowed AWS to launch EC2 in 2006 with limited features and then continually add new functionality, such as additional instance sizes, data centers, Regions, operating system options, monitoring tools, load balancing, auto scaling, and persistent storage. Over time, AWS transformed from a feature-poor service into a multi-billion-dollar business by focusing on customer needs, agility, innovation, cost-efficiency, and security. AWS now has an $85B annual revenue run rate and offers over 3,300 new features and services each year, catering to a wide range of customers from start-ups to multinational companies and public sector organizations. Amazon has succeeded due to its innovative business models, continuous technological advancements, and strategic organizational changes. The company has consistently disrupted traditional industries by introducing new ideas, such as an ecommerce platform for diverse products and services, a third-party marketplace, cloud infrastructure services (AWS), the Kindle e-reader, and the Alexa voice-driven personal assistant. Additionally, Amazon has made structural changes to improve its efficiency, such as reorganizing its US fulfillment network to decrease costs and delivery times, further contributing to its success. As with the LLM chain extractor, the LLM chain filter makes sure that although the key points are covered, the output is efficient for customers looking for concise and contextual answers.

Comparing these different techniques, we can see that in contexts such as detailing AWS's transition from a simple service to a complex, multi-billion-dollar entity, or explaining Amazon's strategic successes, the regular retriever chain lacks the precision the more sophisticated techniques offer, leading to less targeted information. Although very few differences are visible between the advanced techniques discussed, they are far more informative than regular retriever chains.

For customers in industries such as healthcare, telecommunications, and financial services who are looking to implement RAG in their applications, the limitations of the regular retriever chain in providing precision, avoiding redundancy, and effectively compressing information make it less suited to fulfilling these needs compared to the more advanced parent document retriever and contextual compression techniques. These techniques are able to distill vast amounts of information into the concentrated, impactful insights that you need, while helping improve price-performance.
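The difference between the two contextual compression styles can be sketched in plain Python. This is a toy illustration, not the LangChain implementation used in the notebook: a simple keyword-overlap score stands in for the LLM's relevance judgment, and all function and variable names here are invented for the example. The chain extractor rewrites each retrieved document down to its query-relevant sentences, whereas the chain filter keeps or drops whole documents without rewriting them.

```python
def relevance(query: str, text: str) -> float:
    """Fraction of query words that appear in the text (toy stand-in for an LLM judgment)."""
    q = set(query.lower().split())
    t = set(text.lower().split())
    return len(q & t) / len(q) if q else 0.0

def chain_extractor(query: str, docs: list[str]) -> list[str]:
    """Extractor style: compress each document to only its relevant sentences."""
    compressed = []
    for doc in docs:
        kept = [s.strip() for s in doc.split(".") if s.strip() and relevance(query, s) > 0]
        if kept:
            compressed.append(". ".join(kept) + ".")
    return compressed

def chain_filter(query: str, docs: list[str], threshold: float = 0.3) -> list[str]:
    """Filter style: keep or drop whole documents, never rewrite their content."""
    return [doc for doc in docs if relevance(query, doc) >= threshold]

docs = [
    "AWS launched EC2 in 2006. The weather was mild that year.",
    "Amazon started as a bookstore.",
]
query = "When did AWS launch EC2"
print(chain_extractor(query, docs))  # only the relevant sentence survives
print(chain_filter(query, docs))     # the whole first document, unchanged
```

In a real pipeline the relevance judgment is made by the LLM itself, but the shape of the trade-off is the same: the extractor produces shorter, rewritten context at higher compute cost per document, while the filter is cheaper and preserves documents verbatim.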

Clean up

After you're done running the notebook, delete the resources you created to avoid accruing charges for the resources in use:

# Delete the LLM model and endpoint
llm_predictor.delete_model()
llm_predictor.delete_endpoint()

# Delete the embedding model and endpoint
embedding_predictor.delete_model()
embedding_predictor.delete_endpoint()

Conclusion

In this post, we presented a solution that allows you to implement the parent document retriever and contextual compression chain techniques to enhance the ability of LLMs to process and generate information. We tested these advanced RAG techniques with the Mixtral-8x7B Instruct and BGE Large En models available with SageMaker JumpStart. We also explored using persistent storage for embeddings and document chunks and integration with enterprise data stores.

The techniques we applied not only refine the way LLMs access and incorporate external knowledge, but also significantly improve the quality, relevance, and efficiency of their outputs. By combining retrieval from large text corpora with language generation capabilities, these advanced RAG techniques enable LLMs to produce more factual, coherent, and context-appropriate responses, enhancing their performance across diverse natural language processing tasks.

SageMaker JumpStart is at the center of this solution. With SageMaker JumpStart, you gain access to an extensive assortment of open and closed source models, streamlining the process of getting started with ML and enabling rapid experimentation and deployment. To get started implementing this solution, navigate to the notebook in the GitHub repo.


About the Authors

Niithiyn Vijeaswaran is a Solutions Architect at AWS. His area of focus is generative AI and AWS AI Accelerators. He holds a Bachelor's degree in Computer Science and Bioinformatics. Niithiyn works closely with the Generative AI GTM team to enable AWS customers on multiple fronts and accelerate their adoption of generative AI. He's an avid fan of the Dallas Mavericks and enjoys collecting sneakers.

Sebastian Bustillo is a Solutions Architect at AWS. He focuses on AI/ML technologies with a profound passion for generative AI and compute accelerators. At AWS, he helps customers unlock business value through generative AI. When he's not at work, he enjoys brewing a perfect cup of specialty coffee and exploring the world with his wife.

Armando Diaz is a Solutions Architect at AWS. He focuses on generative AI, AI/ML, and Data Analytics. At AWS, Armando helps customers integrate cutting-edge generative AI capabilities into their systems, fostering innovation and competitive advantage. When he's not at work, he enjoys spending time with his wife and family, hiking, and traveling the world.

Dr. Farooq Sabir is a Senior Artificial Intelligence and Machine Learning Specialist Solutions Architect at AWS. He holds PhD and MS degrees in Electrical Engineering from the University of Texas at Austin and an MS in Computer Science from Georgia Institute of Technology. He has over 15 years of work experience and also likes to teach and mentor college students. At AWS, he helps customers formulate and solve their business problems in data science, machine learning, computer vision, artificial intelligence, numerical optimization, and related domains. Based in Dallas, Texas, he and his family love to travel and go on long road trips.

Marco Punio is a Solutions Architect focused on generative AI strategy, applied AI solutions, and conducting research to help customers hyperscale on AWS. Marco is a digital native cloud advisor with experience in the FinTech, Healthcare & Life Sciences, Software-as-a-service, and most recently, Telecommunications industries. He is a qualified technologist with a passion for machine learning, artificial intelligence, and mergers and acquisitions. Marco is based in Seattle, WA, and enjoys writing, reading, exercising, and building applications in his free time.

AJ Dhimine is a Solutions Architect at AWS. He specializes in generative AI, serverless computing, and data analytics. He is an active member/mentor in the Machine Learning Technical Field Community and has published several scientific papers on various AI/ML topics. He works with customers, ranging from start-ups to enterprises, to develop AWSome generative AI solutions. He is particularly passionate about leveraging Large Language Models for advanced data analytics and exploring practical applications that address real-world challenges. Outside of work, AJ enjoys traveling, and is currently at 53 countries with a goal of visiting every country in the world.
