Journal of Systems Architecture 49 (2003) 557–570

Two-level branch prediction using neural networks

Colin Egan a,*, Gordon Steven a, Patrick Quick a, Rubén Anguera a, Fleur Steven a, Lucian Vintan b

a University of Hertfordshire, College Lane, Hatfield AL10 9AB, UK
b University "Lucian Blaga" of Sibiu, Sibiu-2400, Romania

Abstract

Dynamic branch prediction in high-performance processors is a specific instance of a general time series prediction problem that occurs in many areas of science. Most branch prediction research focuses on two-level adaptive branch prediction techniques, a very specific solution to the branch prediction problem. An alternative approach is to look to other application areas and fields for novel solutions to the problem. In this paper, we examine the application of neural networks to dynamic branch prediction. We retain the first level history register of conventional two-level predictors and replace the second level PHT with a neural network. Two neural networks are considered: a learning vector quantisation network and a backpropagation network. We demonstrate that a neural predictor can achieve misprediction rates comparable to conventional two-level adaptive predictors and suggest that neural predictors merit further investigation.

© 2003 Elsevier B.V. All rights reserved.

Keywords: Neural branch prediction; Two-level adaptive branch prediction; Backpropagation network; Learning vector quantisation network

1. Introduction

Branch instructions are a major bottleneck in the exploitation of instruction level parallelism (ILP). The proportion of conditional branch instructions in a program is relatively high. For example, in general-purpose code conditional branches occur approximately every 5–8 instructions [13]. In a simple processor, instructions from the sequential path are typically pre-fetched from the instruction cache in an attempt to ensure that its pipeline is fully utilised. A misfetch penalty occurs when a branch is taken and incorrect instructions have been fetched into the pipeline. The pipeline is then flushed and reloaded with instructions from the branch target. As more pipeline stages are introduced, the misfetch penalty increasingly degrades processor performance. Since there is such a high frequency of branches in general-purpose code, it is essential to reduce the performance degradation by using accurate branch prediction to pre-fetch the correct instructions into the pipeline. A misprediction penalty then only occurs if the branch prediction mechanism incorrectly predicts the branch destination.

* Corresponding author. Tel.: +44-1707-284373; fax: +44-1707-284303. E-mail address: c.egan@herts.ac.uk (C. Egan).

1383-7621/$ - see front matter © 2003 Elsevier B.V. All rights reserved. doi:10.1016/S1383-7621(03)00095-X


The branch prediction problem, therefore, consists of two sub-problems: firstly, generating the correct prediction and secondly, in the case of a taken branch, predicting the correct branch target.

With ever increasing issue rates in multiple-instruction issue (MII) processors and deeper pipelines, the impact of a branch misprediction will severely limit any gains in processor performance. Very high prediction accuracy is required because an increasing number of instructions are lost before a branch misprediction can be corrected. As a result even a misprediction rate of a few per cent involves a substantial performance loss. Even a 3% misprediction rate, achieved by current state-of-the-art two-level adaptive branch predictors, can have a severe limiting impact on MII processor performance [22]. Some researchers [2,22] envisage that a next generation branch predictor could consume 1 Mbyte of the hardware budget. This suggests that current and near future branch predictors, though highly effective in prediction accuracy, are not cost effective. Future branch predictors must sustain prediction accuracy as close to 100% as possible and yet be more cost effective.

If branch prediction is to improve performance, branches must be detected within the dynamic instruction stream, and both the direction taken by each branch and the branch target address must be correctly predicted. Furthermore, all of the above must be completed in time to fetch instructions from the branch target address without interrupting the flow of new instructions to the processor pipeline. A classic branch target cache (BTC) [13] achieves these objectives by holding the following information for previously executed branches: the address of the branch instruction, the branch target address and information on the previous outcomes of the branch. Branches are then predicted by using the PC address to access the BTC in parallel with the instruction fetch process. As a result each branch is predicted while the branch instruction itself is being fetched from the instruction cache. Whenever a branch is detected and predicted as taken, the appropriate branch target is available at the end of the instruction fetch cycle, and instructions can be fetched from the branch target in the next cycle. Straightforward prediction mechanisms based on the previous history of each branch give a prediction accuracy of around 80–95% [13]. This success rate proved adequate for scalar processors, but is generally regarded as inadequate for superscalar designs.

The requirement for higher branch prediction accuracy in superscalar systems and the availability of additional silicon area led to a dramatic breakthrough in the early 1990s with the development of two-level adaptive branch prediction [21,33]. More recently two-level branch predictors have been implemented in several commercial microprocessors [15,16]. However, although high prediction rates are achieved with two-level adaptive predictors, this success is obtained by providing very large arrays of prediction counters or pattern history tables (PHTs). Since the size of the PHT increases exponentially as a function of history register length, the cost of the PHT can become excessive, and it is difficult to exploit a large amount of branch history effectively. Two-level adaptive branch predictors have two other disadvantages. Firstly, in most practical implementations each prediction counter is shared between several branches. There is therefore interference or aliasing between branch predictions. Secondly, large arrays of prediction counters require extensive initial training before they can predict accurately. Furthermore, the amount of training required increases as additional branch history is exploited, further limiting the amount of branch history that can be exploited.

Finally, some branches remain stubbornly hard to predict [19,28]. There are two cases. The outcome of some data dependent branches is effectively random and these branches will never be accurately predicted. However, it should be possible to predict certain branches that are currently hard to predict more accurately by identifying new correlation mechanisms and adding them to the prediction process. We suggest that neural predictors may prove to be a useful vehicle for investigating potential new correlation mechanisms. We emphasise that most branch prediction research is based on two-level adaptive branch predictors, which are themselves based on two closely related correlation mechanisms. Yet, branch prediction is a specific example of a general time series prediction problem that occurs in many diverse fields of science. It is therefore surprising that there has not been more cross-fertilisation of ideas between different application areas. A notable exception is a paper by Mudge's group [7] that demonstrates that all two-level adaptive predictors implement special cases of the prediction by partial matching (PPM) algorithm that is widely used in data compression, speech recognition and handwriting recognition problems. Mudge uses the PPM algorithm to compute a theoretical upper bound on the accuracy of branch prediction. In a later paper, Steven et al. [27] demonstrate that a two-level predictor can be extended to implement a simplified PPM algorithm with a resultant reduction in the misprediction rate. Time series prediction is also an important research topic in neural networks (NNs). It therefore appears natural to look to NNs for a further cross-fertilisation of ideas.

In this paper we explore how NNs can be used to dynamically predict branch outcomes by forecasting future values of data series. One of our main research objectives is to use NNs to identify new correlations that can be exploited by branch predictors. We also wish to determine whether more accurate branch prediction is possible and to gain a greater understanding of the underlying prediction mechanisms. In this paper, we apply NNs to dynamic branch prediction to demonstrate that NNs can achieve the same prediction accuracy as a conventional two-level adaptive predictor. We therefore restrict our neural network inputs to using the same dynamic history register (HR) information as a conventional two-level predictor. Finally, we hope to design and evaluate hardware implementations of simplified neural branch predictors, although in this initial feasibility study, we have ignored the costs of implementing the NNs, assuming that the predictions would be produced in time to be useful. Alternatively, our research may lead to the design of more effective two-level branch predictors.

We explore the suitability of two NNs, a learning vector quantisation network (LVQ) and a backpropagation network, for branch prediction. Through trace-driven simulation, we demonstrate that neural predictors can achieve success rates that are comparable to conventional two-level adaptive predictors.

2. Related work

2.1. Two-level adaptive branch prediction

Most recent research on branch prediction has focused on two-level adaptive prediction [6,11,17,18,21,23,24,33–36]. In a two-level predictor, the first level consists of a HR that records the outcome of the last k branches encountered. The HR may be a single global register (HRg) that records the outcome of the last k branches executed in the dynamic instruction stream or one of multiple local HRs (HRl) that records the last k outcomes of each branch. The second level of the predictor, known as the PHT, records the behaviour of a branch during previous occurrences of the first level predictor. The PHT typically consists of an array of two-bit saturating counters that is indexed by the HR to obtain the prediction. With a k-bit HR, 2^k entries are therefore required if a global PHT is provided, or many times this number if separate HRs and therefore PHTs are provided for each branch.
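To make the structure concrete, the following Python sketch (ours, for illustration only; the history length K = 12 and the counter initialisation are arbitrary choices, not taken from any of the predictors simulated later) models a GAg-style predictor: a single global history register indexing a PHT of 2^k two-bit saturating counters.

# GAg-style two-level predictor: a global k-bit history register (HRg)
# indexes a PHT of 2^k two-bit saturating counters.
K = 12                                # history register length (illustrative)
pht = [1] * (2 ** K)                  # two-bit counters, initially weakly not-taken
hrg = 0                               # global history register, k bits

def predict():
    return pht[hrg] >= 2              # counter values 2 and 3 predict taken

def update(taken):
    """Called once the branch outcome is known."""
    global hrg
    if taken:
        pht[hrg] = min(3, pht[hrg] + 1)
    else:
        pht[hrg] = max(0, pht[hrg] - 1)
    hrg = ((hrg << 1) | int(taken)) & ((1 << K) - 1)  # shift in newest outcome

The exponential cost discussed below is visible directly in this sketch: every additional history bit doubles the size of the pht array.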

Two distinct prediction techniques have in fact been developed, global and local (per-address). If a global HR is used, the predictor exploits correlation between the outcome of a branch and the outcome of neighbouring branches that are executed immediately prior to the branch. If a local HR is used, the predictor exploits correlation between the outcome of a branch and previous outcomes of the same branch.

Two-level branch predictors are usually classified using a system proposed by Yeh and Patt [34]. The six most common configurations are GAg, GAp, GAs, PAg, PAp and PAs. The upper case first letter specifies the first-level mechanism, "G" global or "P" local (per-address). The lower case last letter specifies the second level, which can be "g" global, "p" local (per-address) or "s" a set of branches mapping to the same PHT prediction array. The "A" in the middle emphasises the adaptive or dynamic nature of the predictor. Therefore, GAg, GAp and GAs rely on global branch history and PAg, PAp and PAs rely on local branch history. Additionally a separate BTC is still required to provide branch target addresses. In the case of a local predictor the local HRs can be integrated into the BTC by adding a HR field to each BTC entry.

Since most recent research on branch prediction has concentrated on two-level adaptive techniques, it is useful to explore some of the drawbacks of two-level predictors. The main disadvantages can be found in the following areas:

• High cost of PHT
• Branch interference
• Slow initialisation

The high cost of two-level predictors is a direct result of the size of the second level PHT, which increases exponentially in size as a function of HR length. The high implementation cost of conventional two-level predictors has had a subtle, but important, impact on branch prediction research; it has discouraged any developments that increase the size of the PHT. Perhaps the most obvious example is that researchers are deterred from attempting to extract additional prediction accuracy from very long HRs. Similarly, researchers are discouraged from describing the program path leading to each branch more accurately by recording fuller path information [20], and from combining global and local history information in a single predictor. The use of additional prediction information, such as the branch direction information used in the simple backwards taken, forward not taken (BTFNT) heuristic, is discouraged.

The high cost of a conventional PHT suggests that alternative configurations should be considered. One possibility is to replace the PHT with a prediction cache [8–10,27]. Although a tag field must be added to each PHT entry, the very large size of conventional PHTs suggests that the total number of entries and therefore the total cost of the PHT could be significantly reduced. The danger with a prediction cache is that cache misses will increase the misprediction rate. The impact of prediction cache misses can, however, be minimised, at low cost, by restoring a two-bit prediction counter to each entry of the conventional BTC, which is still required in all two-level predictors to furnish the target address for each branch. These BTC counters can then be used to provide a default prediction whenever there is a miss in the prediction cache [8]. Alternatively, an entirely different approach, such as the neural branch predictors introduced in this paper, can be investigated.
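As a rough illustration of the prediction-cache fallback just described, the sketch below (our own simplification; the dictionary stands in for a small tagged hardware cache, and tag comparison and replacement are omitted) returns the cached counter on a hit and the BTC default counter on a miss.

# Prediction cache with BTC fallback: a small tagged cache replaces the
# full PHT; on a miss, a two-bit counter held in the BTC entry supplies
# a default prediction.
prediction_cache = {}    # maps (branch PC, history pattern) -> two-bit counter

def predict_with_fallback(pc, hr, btc_counter):
    counter = prediction_cache.get((pc, hr))
    if counter is not None:
        return counter >= 2      # hit: use the cached prediction counter
    return btc_counter >= 2      # miss: fall back to the BTC default counter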

In a global predictor, interference or aliasing in the second level PHT occurs whenever two branches generate the same first-level HR pattern and therefore access the same second-level PHT prediction counter [23]. This causes the predictions for two or more branches to affect each other. There have been numerous studies that attempt to reduce the impact of interference [5,10,17,18,24]. However, reducing the amount of interference has generally also increased the initialisation mispredictions and the cost of the predictor.

The third problem is PHT initialisation. In the worst case, the 2^k prediction counters associated with each branch, where k is the length of the HR, must be trained before the predictor is fully effective. Even allowing for the fact that a PHT is effectively a sparse matrix with many unused entries, this situation contrasts sharply with a classic BTC that is fully initialised after one execution of each branch. The impact of PHT training can be reduced by combining a two-level predictor with a classic BTC in a hybrid predictor [6,26]. The BTC will then be used in preference to the two-level predictor while the latter is being initialised.

2.2. Neural branch prediction

Calder introduced ideas from artificial intelligence to branch prediction by using a neural branch predictor to derive static branch predictions [3]. He termed this new technique 'Evidence-based Static Prediction' (ESP). However, Calder was concerned entirely with static or compile-time branch prediction. His predictions were therefore based on information about a program's structure that was readily determined by a compiler. For example, a branch successor path that leads out of a loop or function is less likely to be followed than a path that remains within the loop or function. The idea of Calder's neural branch predictor was to map static features associated with each branch to the probability that the branch will be taken. This approach provides several advantages over the traditional static branch prediction program-based heuristics [1]. First, the technique can be applied to a large range of programming languages, compilers and architectures, since the neural net will perform a dynamic mapping of static features to the probability that the branch will be taken. Second, since the technique uses artificial intelligence algorithms, the most useful heuristics are automatically applied to derive the prediction in cases where more than one heuristic applies, making it unnecessary to order heuristics. Calder achieves a misprediction rate of 20%, remarkably low for static branch prediction. Since Calder's predictions were performed at compile time, he was unable to feed the dynamic branch histories used by two-level predictors into his NNs. As a result, perhaps the most useful contribution of his paper is to suggest a wide range of alternative inputs that might correlate with branch outcomes and which might therefore be usefully added to dynamic predictors.

Jiménez, independently and simultaneously to our investigation into the potential of applying the principles of NNs to dynamic branch prediction, has also investigated neural methods for dynamic branch prediction [14]. His study focuses on a perceptron predictor; we also focus on two perceptron predictors: a simple learning vector quantisation (LVQ) neural predictor and a backpropagation neural network predictor. The major difference between the two studies is that Jiménez only uses the history register as input values to his perceptron predictor, whereas we use the history register and the branch address as input values to our LVQ and backpropagation neural predictors. Furthermore, Jiménez claims to have developed the first perceptron predictor that successfully uses NNs. As far as we are aware, the first known perceptron predictor was developed by Vintan [29–32].

The perceptron and the Adaline [12] are two of the simplest models used by NNs for pattern recognition. A perceptron net consists of a single layer of neurons. Each of the inputs is connected to each neuron by one-way data connections, and each of these connections has an associated weight. The size of the weight is applied to its connection to determine a single output signal. Additionally, each neuron applies a threshold activation function to its input signals, known as the bias weight. It is important to use bipolar input signals (−1, 1) rather than binary input signals (0, 1) because a weight with an input signal of 0 would have no impact on the network. The output signal is the sum of the bias weight with the summation of the product of each input signal and its associated weight:

\text{output signal} = w_0 + \sum_{i=1}^{n} w_i \cdot \text{input signal}_i

In the case of branch prediction, a prediction is considered to be taken if the output signal is ≥ 0 and a prediction is considered not-taken if the output signal is < 0. Both Jiménez and Vintan retained the first level history register of a two-level predictor to supply input signals to their perceptron predictors. However, Vintan's input signals consisted of the branch address as well as the history register. In both studies the neural network was dynamically trained after each branch prediction. Vintan applied the well-known training algorithm that is usually applied to the backpropagation learning algorithm [12]. The backpropagation learning algorithm has two steps. The first step is the forward propagation step that computes the weighted sums and activations for each input value. The second step is used to update the weight for each input value and is a backward pass through the neural network. In contrast, Jiménez used a simpler training algorithm. In his predictor an input value to the network was either −1 or +1. If the input value was the same as the previous branch outcome, where a not-taken branch is associated with −1 and a taken branch associated with +1, then the weight was incremented. Conversely, if the input value was not the same as the previous branch outcome then the weight was decremented.
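A minimal sketch of a perceptron predictor along these lines is given below (in Python, for illustration; the 16-bit history length is arbitrary, and the training rule is the simple increment/decrement scheme described above for Jiménez's predictor, without any threshold-based refinements).

# Single perceptron with bipolar inputs: output = w0 + sum(wi * xi);
# an output >= 0 predicts taken, < 0 predicts not-taken.
N = 16                       # number of bipolar history bits (illustrative)
w = [0] * (N + 1)            # w[0] is the bias weight

def output(history):         # history: list of N values in {-1, +1}
    return w[0] + sum(wi * xi for wi, xi in zip(w[1:], history))

def predict(history):
    return output(history) >= 0

def train(history, taken):
    t = 1 if taken else -1
    w[0] += t                             # bias tracks the branch outcome
    for i, xi in enumerate(history):
        w[i + 1] += 1 if xi == t else -1  # agreement increments, disagreement decrements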

Both researchers conclude that greater correlations are achieved by neural predictors than by two-level predictors and greater prediction accuracy can be achieved. Jiménez showed that his predictor achieved a misprediction rate of 1.71%, which equates to 36% fewer mispredictions than a McFarling style hybrid two-level predictor [18]. Vintan showed that his predictor achieved a misprediction rate of about 11%, which equates to a 3% improvement in the misprediction rate for his neural predictor over a conventional two-level predictor. The difference in the prediction accuracy between the Jiménez and Vintan studies may be explained by the fact that Jiménez focused his study on history register lengths as input values for his perceptron predictor and was able to achieve correlations for history register lengths up to 66 bits. In contrast, Vintan used a combination of the branch address and the history register as input values into his multilayer perceptron predictor, which restricted his maximum history register length to 10 bits. Furthermore, the difference in the training algorithms may be a critical factor in determining the behaviour of the NNs.

3. Branch prediction models

We first briefly consider the performance of a simple learning vector quantisation (LVQ) neural predictor. We then compare the performance of conventional two-level adaptive predictors with a neural predictor using a backpropagation network. In both cases the prediction process is based on two inputs: the branch PC's 10 least significant bits and the HRg, HRl or a combination of HRg and HRl of the k previous branches.

We assume that all predictions are being made during the instruction fetch (IF) stage of the processor pipeline. All our predictors therefore operate in parallel with a classic BTC that detects branches and furnishes the branch target address. Since an instruction is not decoded until the ID stage of the pipeline, only hits in the BTC are known to be branch instructions. The actual prediction is generated by either a neural network or a two-level predictor. Nevertheless, a miss in the BTC always results in a default prediction of not-taken, irrespective of the prediction delivered by the predictor, since the instruction is not necessarily a branch. A 1K four-way set associative BTC is used throughout the paper.
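The sketch below (ours; the dictionary-based organisation is only a stand-in for the 1K four-way set associative hardware structure) illustrates the BTC behaviour assumed here: a hit supplies a target and a counter-based direction, while a miss defaults to not-taken.

# Classic BTC: each entry holds the branch address (the key), the branch
# target address and a two-bit counter summarising previous outcomes.
class BTCEntry:
    def __init__(self, target):
        self.target = target          # predicted branch target address
        self.counter = 2              # two-bit counter, weakly taken

btc = {}                              # maps branch PC -> BTCEntry

def predict(pc):
    """Probe the BTC with the fetch PC; a miss defaults to not-taken."""
    entry = btc.get(pc)
    if entry is None:
        return False, pc + 4          # not a known branch: fetch sequentially
    taken = entry.counter >= 2
    return taken, entry.target if taken else pc + 4

def update(pc, taken, target):
    """Record the resolved outcome and target of an executed branch."""
    entry = btc.setdefault(pc, BTCEntry(target))
    entry.target = target
    entry.counter = min(3, entry.counter + 1) if taken else max(0, entry.counter - 1)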

3.1. An LVQ neural predictor

The first neural network we examined was an LVQ [12] model. Our objective was to determine whether respectable success rates could be delivered by a simple LVQ network that was dynamically trained after each branch prediction.

The LVQ predictor contains two "codebook" vectors: the first vector, Vt, is associated with the branch taken event and the second, Vnt, with the not-taken event. Vt is initialised to all ones and Vnt to all zeros. During the prediction process, the input parameters of the branch address (10 least significant bits) and the k bits of the HR are concatenated to form a single input vector, X. Modified Hamming distances are then computed between X and the two codebook vectors:

HD = \sum_i (X_i - V_i)^2

The vector with the smallest Hamming distance is defined as the winning vector, Vw, and is used to predict the branch. A win for Vt therefore indicates "predict taken", while a win for Vnt indicates "predict not-taken". When the branch outcome is determined, the codebook vector Vw that was used to make the prediction is then adjusted as follows:

V_w(t+1) = V_w(t) \pm a(t)\,[X(t) - V_w(t)]

To reinforce correct predictions, the vector is incremented whenever a prediction was correct and decremented otherwise. The factor a(t) represents the learning factor and is usually set to a small constant less than 0.1. In contrast, the losing vector is unchanged. The neural predictor will therefore be trained continuously as each branch is encountered. It will also be adaptive since the codebook vectors will always tend to reflect the outcome of the branches most recently encountered. In our LVQ predictors, the most recently encountered branches may be global, local or a combination of global and local depending on the HR information used as input signals.
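The complete LVQ prediction and update cycle can be summarised in a few lines of Python (our sketch; the vector lengths are illustrative, and the codebook vectors are held as lists of floats rather than hardware registers).

# LVQ predictor: two codebook vectors, a squared-distance comparison and
# the winner-update rule, with learning factor a(t) fixed at 0.01.
A = 0.01                               # learning factor a(t)
PC_BITS, K = 10, 10                    # PC bits and HR bits (illustrative)
n = PC_BITS + K
v_taken = [1.0] * n                    # Vt, initialised to all ones
v_not_taken = [0.0] * n                # Vnt, initialised to all zeros

def distance(x, v):                    # modified Hamming distance
    return sum((xi - vi) ** 2 for xi, vi in zip(x, v))

def predict(x):
    """x: concatenated PC and HR bits as a list of 0/1 values."""
    return distance(x, v_taken) <= distance(x, v_not_taken)

def train(x, predicted_taken, taken):
    vw = v_taken if predicted_taken else v_not_taken  # only the winner moves
    sign = 1.0 if predicted_taken == taken else -1.0  # reinforce or penalise
    for i in range(n):
        vw[i] += sign * A * (x[i] - vw[i])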

3.2. Branch prediction using a backpropagation neural network predictor

Our second neural network is a backpropagation neural network [4]. The prediction information is fed into a backpropagation net (Fig. 1) which predicts the outcome of each branch. Later, when the outcome of the branch is known, the error in the prediction is backpropagated through the neural network using the classic backpropagation algorithm.

In a backpropagation neural network there are two steps through the network [4]. The first step is a forward sweep from the input layer to the output layer, and then there is a backward step from the output layer to the input layer. The forward step propagates the input vector of the net into the first layer. Outputs from this layer produce a new vector, which is used as the input into the second layer. This procedure is repeated through to the final layer. The outputs of the final layer are the output signals of the network. In the case of a neural branch predictor there is only one output signal, which is used as the prediction. The backward step is similar to the forward step, except that error values are propagated back through the network. These error values are used to determine how the weights are changed. The backpropagation algorithm is therefore used to generate the weights. Inputs and outputs of a backpropagation algorithm may not be restricted to discrete values, which results in a wider range of ways to codify inputs and outputs.

Four different networks have been developed: two of them use global history information while the other two use local history information. The two versions of each arise because the inputs to the net are coded in two different ways: binary, using 0 and 1 for not-taken and taken branches respectively, and bipolar, using −1 and 1. To exploit these different input encodings, two different activation functions are also required, a sigmoidal function for binary inputs and a bipolar sigmoidal function for bipolar inputs:

Sigmoidal function: 1/(1 + e^{-\beta x})

Bipolar sigmoidal function: 2/(1 + e^{-\beta x}) - 1

The factor β controls the degree of linearity in the two activation functions. In particular, as β approaches infinity, the functions become step functions, the form of the activation function used in multi-layer perceptron (MLP) networks.

When predicting a branch using binary inputs, a value greater than or equal to 0.5 on the output cell is considered to be a taken prediction, whereas any value lower than 0.5 is a not-taken prediction. In the bipolar case, positive values or 0 indicate a taken prediction and negative values not-taken. The network is not initially trained, so random values are chosen for the weights between [−2/X, 2/X], where X corresponds to the number of inputs to the net. This selection of weights guarantees that both the weights and their average value will be close to zero and that the net is not biased initially towards either taken or not-taken.
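A compact backpropagation predictor consistent with this description is sketched below (ours; the single hidden layer of eight cells and β = 1 are arbitrary choices, while the bipolar sigmoid, the weight initialisation in [−2/X, 2/X] and the learning rate of 0.125 follow the text).

import math, random

BETA, LR = 1.0, 0.125
X, HIDDEN = 20, 8                         # input and hidden layer sizes (illustrative)

def act(v):                               # bipolar sigmoid: 2/(1 + e^(-bx)) - 1
    return 2.0 / (1.0 + math.exp(-BETA * v)) - 1.0

def act_deriv(y):                         # derivative expressed via the output y
    return 0.5 * BETA * (1.0 - y * y)

rnd = lambda: random.uniform(-2.0 / X, 2.0 / X)
w_hidden = [[rnd() for _ in range(X + 1)] for _ in range(HIDDEN)]  # index 0 is bias
w_out = [rnd() for _ in range(HIDDEN + 1)]

def forward(x):                           # x: list of X bipolar inputs
    h = [act(w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))) for w in w_hidden]
    y = act(w_out[0] + sum(wi * hi for wi, hi in zip(w_out[1:], h)))
    return h, y

def predict(x):
    return forward(x)[1] >= 0.0           # bipolar case: >= 0 predicts taken

def train(x, taken):
    h, y = forward(x)                     # forward step
    d_out = ((1.0 if taken else -1.0) - y) * act_deriv(y)
    for j in range(HIDDEN):               # backward step: propagate the error
        d_h = d_out * w_out[j + 1] * act_deriv(h[j])
        w_hidden[j][0] += LR * d_h
        for i in range(X):
            w_hidden[j][i + 1] += LR * d_h * x[i]
    w_out[0] += LR * d_out
    for j in range(HIDDEN):
        w_out[j + 1] += LR * d_out * h[j]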


4. Trace-driven simulation results

4.1. Simulation environment

Our simulations used the Stanford integer benchmark suite, a collection of eight C programs designed to be representative of non-numeric code, while at the same time being compact. The benchmarks are computationally intensive, with an average dynamic instruction count of 273,000. About 18% of the instructions are branches, of which around 76% are taken. Some of the branches in these benchmarks are known to be particularly difficult to predict; see for example Mudge's detailed analysis [19] of the branches in quicksort. Branch prediction researchers generally use the SPEC benchmark suite, but we consider that the SPEC benchmark suite may not necessarily be the most suitable benchmark for branch prediction. The programs in the Stanford integer benchmark suite are shorter than those in the SPEC benchmark suite. This means that each branch in the Stanford integer benchmark suite is executed fewer times than each branch in the SPEC benchmark suite. Branches in the Stanford benchmark suite are therefore more difficult to predict because the initial training problems are more acute than those in the SPEC benchmark suite and thus provide a better test of prediction accuracy.

The Stanford benchmarks were compiled using a C compiler developed at the University of Hertfordshire for the Hatfield superscalar architecture (HSA) [25]. Instruction traces were then obtained using the HSA instruction-level simulator, with each trace entry providing information on the branch address, branch type and target address. These traces were used to drive a series of trace-driven branch predictors. The trace-driven simulators are highly configurable, the most important parameter being the number of HR bits. As output, the simulators generate the overall prediction accuracy, the number of incorrect target addresses and other useful statistics.

4.2. Conventional two-level predictors

For comparative purposes, we first simulated three global predictors, a GAg predictor, a GAs predictor with 16 PHTs and a GAp predictor (Fig. 2). The average misprediction rate initially falls steadily as a function of global HR length, before flattening out at a misprediction rate of around 9.5%. In general, there is no benefit in increasing the HR length beyond 16 bits for the GAg predictor and 14 bits for the GAs/GAp predictors. Beyond this point there is either no significant benefit from new correlations or any benefit is negated by the additional number of initialisations required in the PHTs.

We also simulated three local predictors: a PAg, a PAs and a PAp predictor (Fig. 3). The local predictors achieve misprediction rates of around 7.5%, significantly better than the global predictors. The best performance of 7.48% is achieved with a PAp predictor and a 28-bit HR. This improvement is largely achieved because local predictors, unlike their global counterparts, continue to benefit from additional HR bits. However, with both global and local two-level predictors the accuracy does not improve smoothly as a function of HR length.

4.3. An LVQ branch predictor

Global predictors perform better with some branches, while local predictors perform better with others. It is therefore highly desirable to combine both forms of prediction within a single predictor. Unfortunately, combining both global and local history registers within a conventional two-level predictor is very costly since both the size and cost of the PHTs increase exponentially as a function of history register length. In contrast, it is very easy to combine global and local history information in a neural predictor since there is no explosive cost increase corresponding to the exponential growth of the PHT size.

Three LVQ predictors were considered with the following inputs:

• PC + HRl
• PC + HRg
• PC + HRl + HRg

The input vector for the neural network was constructed by concatenating the 10 least significant bits of the PC with HRg, HRl or a combination of HRg + HRl as appropriate. Initially the values of the learning step a(t) were varied between 0.1 and 0.001. Eventually the value a(t) = 0.01 was standardised after it had been demonstrated that the predictor was largely insensitive to slight variations in a(t). The simulation results are presented in Fig. 4.

The global LVQ predictor achieved an average misprediction rate of 13.54%. Furthermore, only modest further benefits were realised by increasing HR beyond four bits. The global predictor was therefore unable to benefit from large amounts of HRg information. The local LVQ predictor achieved a significantly lower misprediction rate of 10.91%. However, although this figure was recorded with a 30-bit HRl, the local predictor was also unable to adapt to a large amount of history register information, and no significant improvements were observed with HR sizes greater than 6–10 bits. The superior performance of the local predictor is not entirely unexpected; there is likely to be far more positive re-enforcement of prediction information between distinct branches in a local predictor. In contrast, a global predictor must "learn" about each branch separately.

Finally, a hybrid global and local LVQ predictor, with equal numbers of HRg and HRl bits, yielded a marginal improvement on the local LVQ predictor levels, pushing the best average misprediction rate down to 10.78% (see Fig. 4).

Overall, the results of the LVQ predictor are in line with the average accuracy of 88.10% achieved by a classic BTC with these benchmarks. However, global conventional two-level predictors and local conventional two-level predictors achieve superior performance to our LVQ predictors. Our simple LVQ predictors are therefore unable to compete with conventional two-level adaptive predictors. Nevertheless, we found these first results very encouraging. An LVQ network solves a binary classification problem by attempting to find a single multi-dimensional plane that divides the input space, in this case a taken and a not-taken space, into two. Although the plane can be changed dynamically as each branch executes, it appears unlikely that an entirely satisfactory solution can be found. Since our LVQ predictors nonetheless managed to equal the performance of a classic BTC, we were encouraged to develop further neural predictors.

4.4. A backpropagation neural predictor

A total of four backpropagation neural predictors were simulated:

• A global backpropagation neural predictor, using PC + HRg, with binary inputs.
• A global backpropagation neural predictor, using PC + HRg, with bipolar inputs.
• A local backpropagation neural predictor, using PC + HRl, with binary inputs.
• A local backpropagation neural predictor, using PC + HRl, with bipolar inputs.

We did not simulate a backpropagation neural predictor that uses a combination of HRg and HRl. The simulation results are plotted in Fig. 5 as a function of HR length. A learning rate of 0.125 was used throughout.

The global backpropagation neural predictor with binary inputs (0 or 1) achieves a misprediction rate of 11.28%, which is significantly better than the global LVQ predictor. However, the global backpropagation predictor with bipolar inputs (−1, +1) significantly improves on this figure and achieves a misprediction rate of 8.77%. Intuitively, feeding in not-taken results as minus one allows the backpropagation neural predictor to exploit correlations between not-taken branches and the branch being predicted. In contrast, if not-taken results are fed in as zero, they can have little direct result on the final outcome, since their weighted input into each intermediate neural cell must always be zero. Interestingly, the prediction accuracy also continues to improve as HR length is increased, and only finally dips below a misprediction rate of 9% with a HR length of 26 bits.

The local backpropagation neural predictor consistently outperforms the global backpropagation neural predictor. With binary inputs, the best misprediction rate is 10.46%, while with bipolar inputs 8.47% is achieved. Significantly, backpropagation neural prediction performance is now comparable with the performance of conventional two-level adaptive predictors (see Fig. 6). The best global backpropagation neural predictor with a misprediction rate of 8.77% is 5.2% better than the best GAs predictor, while the best local backpropagation predictor at 8.47% is 13.2% worse than the best PAs predictor (see Fig. 6).

5. Conclusions

In this study, we sought to determine whether a neural network could mimic a two-level adaptive branch predictor and achieve comparable success rates. Two types of neural predictors were simulated, an LVQ predictor and a backpropagation predictor. While the LVQ predictor only achieved results comparable to a traditional BTC, the backpropagation predictor performance was comparable to conventional two-level adaptive predictors. In the case of global predictors, the best neural predictor was marginally superior to the best conventional two-level predictor. In the case of local predictors, the best conventional two-level predictor was marginally superior to the best neural predictor. These results suggest that not only can NNs generate respectable prediction results, but in some circumstances a neural predictor may be able to exploit correlation information more effectively than a conventional predictor.

Traditionally, NNs undergo exhaustive and often very time-consuming training before they are used. In this respect, branch prediction appears to be an unpromising application for NNs. In branch prediction, a neural network is expected to sample the outcome of a specific branch once and to then predict the same branch when it is encountered for a second time. The most exciting result of these simulations is therefore the extent to which backpropagation NNs are able to assimilate and benefit from large amounts of history register information with a minimum of training. The distinct drop in the bipolar backpropagation misprediction rate when 26 bits of HR are used is a good illustration of this result. Our results therefore suggest that NNs can adapt rapidly enough to be successfully used in dynamic branch prediction.

NNs provide an extremely interesting topic for future branch prediction research. One challenge is to construct composite input vectors for neural network predictors that will enable them to outperform conventional predictors. This task involves both identifying new correlation mechanisms that can be exploited by neural prediction and tailoring the input information to fully exploit the capabilities of a neural predictor.

Work is ongoing to design hardware implementations of the neural models simulated, and it seems feasible to build neural network predictors that work quickly enough to be useful within a sensible silicon budget. For example, by coding the +1/−1 signals as 0/1 and the weights and thresholds as small fixed-point signed fractions, the apparently expensive multiplications and floating-point arithmetic become much simpler and faster operations.
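The arithmetic simplification can be seen in the following sketch (ours; the integer weights stand in for small fixed-point signed fractions): with bipolar inputs each product w_i × x_i reduces to a conditional add or subtract, so no multipliers are needed.

def perceptron_sum(weights, history_bits):
    """weights: ints, weights[0] is the bias; history_bits: 0/1 encoding -1/+1."""
    total = weights[0]
    for w, bit in zip(weights[1:], history_bits):
        total += w if bit else -w    # x = +1 adds w, x = -1 subtracts w
    return total                     # predict taken when total >= 0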

References

[1] T. Ball, J. Larus, Branch Prediction for Free, Proceedings of the SIGPLAN 93 Conference on Programming Language Design and Implementation, June 1993, pp. 300–313.

[2] D. Burger, J.R. Goodman, Billion-transistor architectures, IEEE Computer (September) (1997) 46–49.

[3] B. Calder, D. Grunwald, D. Lindsay, Corpus-based static branch prediction, SIGPLAN Notices, June 1995, pp. 79–92.


[4] R. Callan, The Essence of Neural Networks, Prentice-Hall, 1999.

[5] P. Chang, E. Hao, T. Yeh, Y. Patt, Branch Classification: A New Mechanism for Improving Branch Predictor Performance, Micro-27, San Jose, California, November 1994, pp. 22–31.

[6] P. Chang, E. Hao, Y.N. Patt, Alternative Implementations of Hybrid Branch Predictors, Micro-29, Ann Arbor, Michigan, November 1995, pp. 252–257.

[7] I.K. Chen, J.T. Coffey, T. Mudge, Analysis of Branch Prediction via Data Compression, Proceedings of the 7th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS VII), Cambridge, MA, USA, October 1996, pp. 128–137.

[8] C. Egan, Dynamic Branch Prediction in High Performance Superscalar Processors, PhD thesis, University of Hertfordshire, August 2000.

[9] C. Egan, G.B. Steven, W. Shim, L. Vintan, Applying caching to two-level adaptive branch prediction, in: Digital Systems Design Architectures, Methods and Tools, Warsaw, Poland, September 2001, pp. 186–193.

[10] C. Egan, G.B. Steven, L. Vintan, Cached Two-level Adaptive Branch Predictors with Multiple Stages, Lecture Notes in Computer Science (LNCS), Trends in Network and Pervasive Computing (ARCS-2002), Karlsruhe, Germany, April 2002, pp. 179–191.

[11] M. Evers, S.J. Patel, R.S. Chappell, Y.N. Patt, An Analysis of Correlation and Predictability: What Makes Two-level Branch Predictors Work, ISCA '25, Barcelona, Spain, June 1998, pp. 52–61.

[12] S.I. Gallant, Neural Networks and Expert Systems, MIT Press, 1993.

[13] J.L. Hennessy, D.A. Patterson, Computer Architecture: A Quantitative Approach, third ed., Morgan Kaufmann Publishers, 2002.

[14] D.A. Jiménez, C. Lin, Dynamic Branch Prediction with Perceptrons, Proc. of the 7th International Symposium on High Performance Computer Architecture (HPCA-7), Monterrey, NL, Mexico, 2001, pp. 197–296.

[15] Intel Corporation, A Tour of the Pentium Pro Processor Microarchitecture, http://www.intel.com/procs/p6/p6white/p6white.htm.

[16] R.E. Kessler, The Alpha 21264 Microprocessor, IEEE Micro (March) (1999) 24–36.

[17] C.C. Lee, I.-C.K. Chen, T.N. Mudge, The Bi-Mode Branch Predictor, Micro-30, Research Triangle Park, North Carolina, December 1997, pp. 4–13.

[18] S. McFarling, Combining Branch Predictors, WRL Technical Note TN-36, DEC, June 1993.

[19] T.N. Mudge, I. Chen, J. Coffey, Limits of Branch Prediction, Technical Report, Electrical Engineering and Computer Science Department, The University of Michigan, Ann Arbor, Michigan, USA, January 1996.

[20] R. Nair, Dynamic Path-Based Branch Correlation, Micro-28, Ann Arbor, Michigan, November 1995, pp. 15–23.

[21] S. Pan, K. So, J.T. Rahmeh, Improving the Accuracy of Dynamic Branch Prediction Using Branch Correlation, ASPLOS-V, Boston, October 1992, pp. 76–84.

[22] Y.N. Patt, S.J. Patel, D.H. Friendly, J. Stark, One billion transistors, one uniprocessor, one chip, IEEE Computer (September) (1997) 51–57.

[23] S. Sechrest, C. Lee, T. Mudge, The Role of Adaptivity in Two-Level Branch Prediction, Micro-29, Ann Arbor, Michigan, November 1995, pp. 2–269.

[24] E. Sprangle, R.S. Chappell, M. Alsup, Y.N. Patt, The Agree Predictor: A Mechanism for Reducing Negative Branch History Interference, ISCA '24, Denver, Colorado, June 1997, pp. 284–291.

[25] G.B. Steven, D.B. Christianson, R. Collins, R. Potter, F.L. Steven, A superscalar architecture to exploit instruction level parallelism, Microprocessors and Microsystems 20 (7) (1997) 391–400.

[26] G.B. Steven, C. Egan, P. Quick, L. Vintan, Reducing Cold Start Mispredictions in Two-level Adaptive Branch Predictors, CSCS-12, Bucharest, Romania, May 1999, pp. 145–150.

[27] G.B. Steven, C. Egan, L. Vintan, A Cost Effective Cached Correlated Two-level Adaptive Branch Predictor, 18th IASTED International Conference on Applied Informatics, Innsbruck, Austria, February 2000.

[28] L.N. Vintan, C. Egan, Extending Correlation in Branch Prediction Schemes, Euromicro 99, Vol. 1, Milan, Italy, September 1999, pp. 441–448.

[29] L. Vintan, Predicting Branches through Neural Networks: an LVQ and an MLP Approach, Technical Report, University "Lucian Blaga" of Sibiu, Romania, February 1999.

[30] L. Vintan, M. Iridon, Towards a High Performance Neural Branch Predictor, International Joint Conference on Neural Networks, Washington, USA, July 1999.

[31] L. Vintan, Towards a Powerful Dynamic Branch Predictor, Romanian Journal of Information Science and Technology, Bucharest, Romania, 2000.

[32] G. Steven, C. Egan, R. Anguera, L. Vintan, Dynamic Branch Prediction using Neural Networks, Proceedings of International Euromicro Conference DSD '2001, Warsaw, Poland, September 2001, pp. 178–185, ISBN 0-7695-1239-9.

[33] T. Yeh, Y.N. Patt, Two-Level Adaptive Training Branch Prediction, Micro-24, Albuquerque, New Mexico, November 1991, pp. 51–61.

[34] T. Yeh, Y. Patt, Alternative Implementations of Two-Level Adaptive Branch Prediction, ISCA-19, Gold Coast, Australia, May 1992, pp. 124–134.

[35] T. Yeh, Y. Patt, A Comprehensive Instruction Fetch Mechanism for a Processor Supporting Speculative Execution, Micro-25, Portland, Oregon, December 1992, pp. 129–139.

[36] T. Yeh, Y.N. Patt, A Comparison of Dynamic Branch Predictors that Use Two Levels of Branch History, ISCA-20, San Diego, May 1993, pp. 257–266.


Colin Egan received his BSc degree in Computer Science (Systems Engineering) in 1996 and his PhD in 2000 for his work in Dynamic Branch Prediction in High Performance Superscalar Processors, from the University of Hertfordshire. He is currently a Senior Lecturer in the Department of Computer Science at the University of Hertfordshire, teaching on a wide range of courses. He has active research interests in the fields of High Performance Systems and Computer Architecture, Processor Design and Dynamic Branch Prediction.

Gordon Steven graduated from Princeton University, USA with a BSE in Electronics (High Honors, Phi Beta Kappa) in 1967. He received his MSE in Electronics, also from Princeton, and his PhD from Manchester University for his work in Computer Engineering on MU5. Following university, Gordon Steven spent 10 years in industry developing various computer systems. At Plessey he worked on the capabilities-based PP250 multiprocessor system, at CTL he worked on the development of the early Modula-1 minicomputer and at HSDE he worked on microprocessor based control systems. In 1979, Gordon Steven joined the Computer Science Department of the University of Hertfordshire, where he was a Reader in Computer Architecture until his retirement in September 2002.

Patrick Quick graduated from the University of Cambridge in 1973 with a B.A. in Mathematics. After teaching Mathematics and then Computer Science for some years, he obtained a Post-Graduate Diploma in Computer Science in 1987, again from the University of Cambridge. Since 1987 he has been a Senior Lecturer in Computer Science at the University of Hertfordshire, teaching on a wide range of Computer Systems and Programming courses. He has active research interests in the fields of High Performance Processor Design and Dynamic Branch Prediction.

Rubén Anguera completed his undergraduate studies in Computer Science at the University of Hertfordshire in 2000. His final year project was entitled 'Using Neural Networks to Perform Dynamic Branch Prediction in a High Performance Superscalar Architecture'. Some of the results of his project have been used in this paper.

Fleur Steven graduated from the University of Hull in 1985 with a B.Sc. in Zoology. She received her M.Sc. in Computer Science in 1986 and her PhD for her work in Computer Architecture, both from the University of Hertfordshire. She was a post-doctoral research fellow at the University of Hertfordshire until April 2001.

Lucian Vintan obtained an MSE (Computer Science) in 1987 and a PhD (Computer Science) in 1997 from the University "Politechnica" of Timisoara, Romania. He is a Professor of Computer Science and Engineering at the University "Lucian Blaga" of Sibiu, Sibiu, Romania. Lucian is an active researcher in the fields of High Performance Processor Design and Dynamic Branch Prediction and has collaborated with the Computer Architecture Research Group at the University of Hertfordshire since 1996.
