
  • NASA Aeronautics Research Institute

    Separation Platform for Integrating Complex Avionics (SPICA)

    NASA Aeronautics Research Mission Directorate (ARMD) FY12 LEARN Phase I Technical Seminar

    November  13–15,  2013  

  • NASA Aeronautics Research Institute

    SPICA  

    Architecture-integration overview diagram: hosted control applications and domain-specific components (navigation, route planning, communications), the aircraft hardware architecture, and integrated vehicle requirements (system safety and security requirements; rate, jitter, and duration; RTOS model; mutual exclusion, performance, integrity, availability, synchronization; multi-core and multi-processing, distributed sensing and actuation, physical separation, bandwidth, hierarchical memories) are captured in AADL models. SMT and other solvers select the allocation and scheduling, yielding a system architecture specification and a configured safe/secure RTOS (scheduler, memory manager, thread manager, message router) deployed on the aircraft hardware architecture.

  • NASA Aeronautics Research Institute

    Presentation Outline

    • The problem
    • Technical approach
    • Results of Phase I
    • Impact
    • Next steps


  • NASA Aeronautics Research Institute

    787 Common Data Network


    Diagram showing where the Common Core System (CCS) is connected throughout the 787 aircraft. Most of what is noted in the fuselage are the 21 or so remote data concentrators that GE provides, which act as advanced sensor interfaces to the CCS. Source: GE Aviation.



  • NASA Aeronautics Research Institute

    A380 IMA


    Source: http://www.artist-embedded.org/docs/Events/2007/IMA/Slides/ARTIST2_IMA_Itier.pdf


  • NASA Aeronautics Research Institute

    Avionics Hardware Example

    CPM: Core Processing Module
    IOM: Input / Output Module
    LRM: Line Replaceable Module

    SAFEbus® design: Honeywell International; Brendan Hall et al., "Ringing Out Fault Tolerance," DSN'05, pp. 298–307.

    Diagram of an Airplane Information Management System (AIMS) cabinet: four Core Processing Modules (CPM1–CPM4) and four Input/Output Modules (IOM1–IOM4) interconnected by SAFEbus®, with the IOMs hosting the ARINC 429 and ARINC 629 connections to the rest of the aircraft. BIU: Bus Interface Unit; ARINC 429 and ARINC 629: communications buses.


  • NASA Aeronautics Research Institute

    Technical approach

    • Modeling the entire aircraft avionics in SAE's open AADL standard
    • Formal model of relevant constraints
    • Using a Satisfiability Modulo Theories (SMT) solver, extracting information directly from the AADL model.


    Innovations:
    – Complete, consistent set of constraints defining a correct schedule (not an algorithm or a priority scheme), supporting a wide range of architectures and protocols
    – Using a generic solving engine (the yices SMT solver) to generate schedules
    – Solving multiple levels of a hierarchical scheduling problem, all at the same time and in the same model (a sketch follows below)
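    To make that last point concrete, here is a minimal sketch in yices input syntax of two levels of a scheduling hierarchy expressed in one model: an ARINC 653-style partition window placed within a major frame, and a task constrained to run inside that window. The names, frame length, and durations are illustrative assumptions, not values from the Phase I models.

        ;; Level 1: a partition window inside a 25 ms major frame (illustrative numbers).
        (define winStart::real)     ;; partition window start (ms), chosen by the solver
        (define winLen::real 10)    ;; partition window length (ms)
        (assert (and (>= winStart 0) (<= (+ winStart winLen) 25)))

        ;; Level 2: a task that must execute entirely inside its partition's window.
        (define tStart::real)       ;; task start (ms), chosen by the solver
        (define tDur::real 2)       ;; task duration (ms)
        (assert (and (>= tStart winStart)
                     (<= (+ tStart tDur) (+ winStart winLen))))

        (check)

    Because both levels are ordinary constraints over shared variables, the solver can place the window and the task together instead of fixing one level before the other.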


  • NASA Aeronautics Research Institute

    Phase I Output

    As a result of this project, we have generated:
    – a formal specification of the complete set of constraints, sufficient to represent a wide range of different avionics architectures and problems,
    – a large set of test problems for input to the yices SMT solver, demonstrating the use of those constraints, along with output results and performance data,
    – a tunable test problem generator, automatically generating problem instances in yices input format, and
    – an exemplar aircraft avionics architecture, rendered in both diagrams and AADL.

    These artifacts are available under SBIR data rights for government use. In addition, we intend to turn the forthcoming Phase I final report into one or more technical papers for conference submission.


  • NASA Aeronautics Research Institute

    Modeling the entire aircraft avionics

    • Hierarchical organization
    • Asynchronous boundaries
    • ARINC 429, 653, 659, 664
    • Varying latencies, rates, criticality
    • Shared memory, buffers, and buses
    • …


  • NASA Aeronautics Research Institute

    Hardware Parts

    Diagram of the hardware part library used in the AADL models: a Freescale e500mc part (e500core with L1/L2/L3 caches) and the e500.X03c module built from four e500 cores with PCI; an Intel i5 (i5core); an ARM arm7 part (arm7core) and the arm7.X03c module built from four arm7 cores with PCI; a TI TI320C; RAM, ROM, and mass storage; and AFDX, AFDX_FO, VME, Ethernet, and PCI buses.

  • NASA Aeronautics Research Institute

    Modules  


    Diagram of the module library: a Cabinet Switch (arm7.X03c, ROM, RAM, PCI, AFDX); a Core Processing Module (e500.X03c, ROM, RAM, PCI, AFDX); a Network Interface (FOX) (arm7.X03c, ROM, RAM, PCI, AFDX, AFDX_FO; ports labeled ACS and FOX); and a Remote Data Concentrator (ti320.X03c, ROM, RAM, I2C (?), FOX/AFDX_FO, ARINC 429, A/D, and CAN, with the CAN link crossing an asynchronous boundary).


  • NASA Aeronautics Research Institute

    Core Processing Module Detail


    Diagram expanding the Core Processing Module from its symbol (e500.X03c, ROM, RAM, AFDX, PCI) to its implementation: the e500.X03c processor on a PCI bus, ROM and RAM on a memory bus (MemBus), a bridge between the two buses, and an AFDX_FO connection.


  • NASA Aeronautics Research Institute

    Core Computer


    Diagram of the Core Computer, shown both as a symbol (AFDX_FO, eight CPMs, switch, two FOX network interfaces, AFDX) and as its implementation: eight Core Processing Modules (e500.X03c, ROM, RAM, PCI) and two Network Interfaces (FOX) (arm7.X03c, ROM, RAM, PCI, AFDX_FO) attached via AFDX to two Cabinet Switches (arm7.X03c, ROM, RAM, PCI).


  • NASA Aeronautics Research Institute

    Adventium "X-03c"

    • Next Gen commercial transport
    • Twin engine
    • Long haul
    • Narrow body
    • Mixed passenger / cargo


  • NASA Aeronautics Research Institute

    X-03c Hardware Architecture


    Diagram of the X-03c hardware architecture: common cabinets on an AFDX network, with data concentrators performing D/A and A/D conversion between the network and the aircraft's sensors and actuators.

  • NASA Aeronautics Research Institute

    X-03c Hardware Architecture (AADL)


    Diagram of the X-03c hardware architecture as modeled in AADL: two Core Computers (each with eight CPMs, a switch, and two FOX interfaces on AFDX), one in the forward EE bay and one in the aft EE bay, connected over AFDX_FO to Remote Data Concentrators that reach the sensors and actuators through ARINC 429, A/D, and CAN links.

  • NASA Aeronautics Research Institute

    Formal model of relevant constraints

    Latency
    Jitter
    Preemption
    Over/undersampling
    Asynchronous boundaries
    Task grouping / varying context-switch times
    Inter- and intra-frame timing constraints
    Shared memory
    Resource assignment
    …


    For the full set and formal definitions, see the Final Report.

  • NASA Aeronautics Research Institute

    Task Grouping and Preemption


  • NASA Aeronautics Research Institute

    Communication Latency


    Latency: Preemption
    • Producers and consumers can be interrupted or preempted.
    • In that case, latency is still measured from the producer's begin event to the consumer's end event.
    Timeline showing Producer A, the data transfer A→B, and Consumer B, with Latency AB spanning from A's begin event to B's end event.

    Latency: Oversampling
    • Oversampling can be used to reduce latency.
    • Within a synchronous temporal domain, latency is measured to the first consumer instance.
    Timeline showing Producer A at 5 Hz and Consumer B at 10 Hz, with Latency AB measured to the first B instance after the transfer.
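    A minimal sketch of how such a latency requirement could be handed to the solver, in yices input syntax (the variable names and the 50 ms bound are illustrative assumptions, not the Phase I encoding):

        ;; Latency A->B runs from the producer's begin event to the end event of
        ;; the consumer instance that reads the data, regardless of preemption.
        (define prodBeginA::real)       ;; begin event of producer A (ms)
        (define consEndB::real)         ;; end event of the consuming instance of B (ms)
        (define maxLatencyAB::real 50)  ;; required bound on latency A->B (ms)
        (assert (>= consEndB prodBeginA))
        (assert (<= (- consEndB prodBeginA) maxLatencyAB))
        (check)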

  • NASA Aeronautics Research Institute

    Latencies Across Asynch. Boundary


    Latency Jitter: Asynchronous Boundaries
    The minimum latency jitter guarantee across an asynchronous boundary equals the jitter plus the period of the slowest rate of the producer/consumer pair.
    Timeline showing Task A in 10 ms frames and Task B in 160 ms frames: when the frames are in phase sync the latency is large; when they are out of phase sync the latency is at its minimum.
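    For example, with the 10 ms and 160 ms frames shown above, the delivery time of a given sample can vary by roughly the task-level jitter plus one 160 ms frame, depending on how the two unsynchronized frame sets happen to line up when the data crosses the boundary.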

  • NASA Aeronautics Research Institute

    Using an SMT solver

    • Effective combination of logical and mathematical reasoning
    • Based on technologies demonstrated to scale to millions of variables and constraints


    ;;; Jobs on a given cpu may not overlap
    (assert (or (/= pJ1 pJ2) (...
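    The excerpt above is cut off in this transcript. A self-contained sketch of what such a non-overlap constraint can look like in yices input syntax is given below; the start times, durations, and 20 ms frame are illustrative assumptions, and in practice one such disjunction would be asserted for each pair of jobs that might share a processor.

        ;; Jobs J1 and J2 with processor assignments, start times, and durations.
        (define pJ1::int)      ;; processor hosting job J1
        (define pJ2::int)      ;; processor hosting job J2
        (define sJ1::real)     ;; start time of J1 (ms)
        (define sJ2::real)     ;; start time of J2 (ms)
        (define dJ1::real 3)   ;; duration of J1 (ms), illustrative
        (define dJ2::real 5)   ;; duration of J2 (ms), illustrative

        ;; Both jobs fit within a 20 ms frame.
        (assert (and (>= sJ1 0) (<= (+ sJ1 dJ1) 20)))
        (assert (and (>= sJ2 0) (<= (+ sJ2 dJ2) 20)))

        ;; Jobs on a given cpu may not overlap: either they are on different
        ;; processors, or one finishes before the other starts.
        (assert (or (/= pJ1 pJ2)
                    (<= (+ sJ1 dJ1) sJ2)
                    (<= (+ sJ2 dJ2) sJ1)))

        (check)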

  • NASA Aeronautics Research Institute

    Undersampling  


    Gantt chart of a generated schedule (test case test8; axes: time 0–4000, resource): task instances t1(1)–t1(4), t2(1)–t2(4), t4(1), and t3(1)–t3(2) across three partitions, with t3 running at half the rate (undersampling).

  • NASA Aeronautics Research Institute

    Dataflow and Resource Assignment


    Gantt chart of a generated schedule (test case test14; axes: time 0–4000, resources 1–3): task instances t1–t6 allocated across three resources and three partitions.

  • NASA Aeronautics Research Institute

    Multiple Buses and Asynch. Boundary


    Gantt chart of a generated schedule (test case test15; axes: time 0–3000, resources 1–13): task instances t01–t16 allocated across thirteen resources and fifteen partitions, spanning multiple buses and an asynchronous boundary.

  • NASA Aeronautics Research Institute

    Scaling For Different Problems


    Plot: "Test cases scaling" — time to schedule (sec, roughly 0–1100) versus problem-size multiplier (1–8), one curve per test case (test1, test2, test4–test11, test14, test15, test16).


  • NASA Aeronautics Research Institute

    Scaling: Processor Load vs. Time


    Plot: "Load versus Time" — average time to schedule (sec, 0–1500) versus processor load (0.05–0.95), with four series: 0 flows including UNSAT instances, 0 flows SAT only, 5 flows including UNSAT instances, and 5 flows SAT only.


  • NASA Aeronautics Research Institute

    Impact: NASA Programs

    • Integrated Systems Research -- investigating the avionics-level integration of novel functions, hardware systems, and architectures.

    • Aviation Safety -- This program includes assurance for flight-critical systems, including managing the complexity of architecting, validating, and verifying the correct functioning of increasingly complex avionics. SPICA's output is a concrete schedule which can easily be verified to satisfy requirements governing execution times, latencies, and sampling rates, as well as more complex issues such as metastable communications across an asynchronous boundary.

    • Orion -- SPICA is developing relevant capabilities for other complex, networked vehicular systems. For example, NASA's Orion MPCV uses several of the protocols and standards SPICA is designed to address.


  • NASA Aeronautics Research Institute

    Impact: Outside of NASA

    1. Adventium is part of the System Architecture Virtual Integration (SAVI) consortium as a tool vendor partner. SAVI is an Aerospace Vehicle Systems Institute (AVSI) program, with membership from industry, government, and academia.

    2. The Phase I proposal included letters of support from Lockheed Martin and the Army Aviation and Missile Research Development and Engineering Center (AMRDEC).

    3. Adventium has a current contract supporting the Army in the development of an Architecture-Centric Virtual Integration Process for Future Vertical Lift mission systems.


  • NASA Aeronautics Research Institute

    Next steps

    Technical issues
    • Multi-core, more generally, e.g., contention for on-board cache
    • Integrating multiple scheduling approaches
    • Other protocols, as needed

    Maturation
    • Scaling
    • Finalize translation from AADL to SMT input format
    • Different avionics architectures


  • NASA Aeronautics Research Institute

    Multi-core

    • Characteristics addressed in Phase I
      – Shared IO
      – Shared buffers
      – Task allocation to discrete processing resources
      – (A)synchronous communicating processes

    • Issues deferred to Phase II
      – Memory contention from different cores, including various levels of cache
      – Migration between cores
      – (virtualization)


  • NASA Aeronautics Research Institute

    Integrating Different Schedulers

    • SPICA produces a static schedule that is mathematically guaranteed to satisfy the input constraints.

    • Dynamic schedulers are specified in terms of a set of schedulability constraints.

    • Current integrations provide a static allocation within which the dynamic scheduler(s) have control.

    • SMT is specifically designed to incorporate specialized types of constraints.

    So: Is it possible to specify schedulability constraints in a decomposable form, such that the resulting allocation may take several forms, but is in any case guaranteed to be schedulable? For example, is one allocation more efficient than another at accommodating sporadic, high-priority, low-latency tasks and still providing the required guarantees for other tasks?
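    One way such a decomposable schedulability constraint could be phrased is sketched below in yices input syntax; the budget window, task parameters, and the simple utilization-style test are illustrative assumptions standing in for a real schedulability analysis.

        ;; The static allocation grants a partition 'budget' ms of CPU per 100 ms window;
        ;; the dynamically scheduled tasks inside the partition must fit in that budget.
        (define budget::real)   ;; ms of CPU per 100 ms window, chosen by the solver
        (define d1::real 2)     ;; task 1: 2 ms every 20 ms -> 5 instances per window
        (define d2::real 5)     ;; task 2: 5 ms every 50 ms -> 2 instances per window
        (assert (and (> budget 0) (<= budget 100)))
        ;; Schedulability: the dynamic scheduler's total demand per window fits the budget.
        (assert (<= (+ (* 5 d1) (* 2 d2)) budget))
        (check)

    Whatever budget the solver selects then bounds the region within which the dynamic scheduler is free to operate, while the surrounding static allocation remains guaranteed.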


  • NASA Aeronautics Research Institute

    Scaling  

    • Current system solves problems involving dozens to hundreds of constraints, in minutes.
    • SMT technology has demonstrated performance on millions of constraints.
    • Growth in solving time with problem size is reasonable.

    Next steps:
    • Search control
    • Problem reformulation


    In previous work, we have demonstrated several orders of magnitude increase in problem size and decrease in solving time.


  • NASA Aeronautics Research Institute

    The End
