BI Interview Questions Ver 1.0



1. What are the main changes between BW 3.5 and BI 7.0?

Ans:

1. In InfoSets you can now include InfoCubes as well.

2. The remodeling transaction helps you add new key figures and characteristics, and handles the historical data as well without much hassle. This is available only for InfoCubes.

3. The BI Accelerator (for now only for InfoCubes) helps reduce query runtime by roughly a factor of 10 to 100. The accelerator is a separate appliance and adds cost; vendors include HP and IBM.

4. Monitoring has been improved with a new portal-based cockpit, which means the project needs Enterprise Portal (EP) expertise to implement the portal.

5. Search functionality has improved: you can now search for any object, unlike in 3.5.

6. Transformations are in and the old routines are out, although you can still revert to the old technology.

7. The Data Warehousing Workbench replaces the Administrator Workbench.

8. Functional enhancements have been made for the DataStore object: a new type of DataStore object, and enhanced settings for performance optimization of DataStore objects.

9. The transformation replaces the transfer and update rules.

10. New authorization objects have been added.

11. Remodeling of InfoProviders supports you in Information Lifecycle Management.

12. The DataSource: there is a new object concept for the DataSource. Options for direct access to data have been enhanced. From BI, remote activation of DataSources is possible in SAP source systems.

13. There are functional changes to the Persistent Staging Area (PSA).

14. BI supports real-time data acquisition.

15. SAP BW is now formally known as BI (part of NetWeaver 2004s) and implements Enterprise Data Warehousing (EDW).

16. The ODS has been renamed the DataStore object.

17. A write-optimized DataStore object has been introduced, which has no change log and whose requests do not need activation.

18. Introduction of the end routine and the expert routine.

19. XML data can be pushed into the BI system (into the PSA) without the Service API or delta queue.

20. Loading through the PSA has become mandatory; you cannot skip it, and there is no IDoc transfer method in BI 7.0. The DTP (Data Transfer Process) replaces the transfer and update rules, and within the transformation we can now write a start routine, expert routine, and end routine during data load.

21. New data flow capabilities such as the Data Transfer Process (DTP) and Real-time Data Acquisition (RDA).

22. Enhanced and graphical transformation capabilities, such as drag-and-relate options.

23. User management has been reworked (including a new concept for analysis authorizations) for more flexibility.

2. What are the roles and responsibilities during an implementation project and during a support project?

Responsibilities in an implementation project: for example, let's say it is a fresh implementation of BI, or for that matter any SAP implementation.

First and foremost is requirements gathering from the client. Based on the requirements, you create the business blueprint of the project, which covers the entire process from the start to the end of the implementation.

After the blueprint phase sign-off, the realization phase starts, where the actual development happens. In our example, after installing the necessary software and patches for BI, we discuss with the end users who are going to use the system inputs such as how they want a report to look and what the key performance indicators (KPIs) for the reports are; basically it is a question-and-answer session with the business users. After collecting that information, the development happens on the development servers.

After the development comes to an end, the same objects are tested on the quality servers for bugs, errors, etc. When all tests are done, we move all the objects to the production environment and test again whether everything works fine.

At go-live, the actual postings happen from the users, and reports are generated based on those inputs, which are available as analytical reports for management to take decisions.


The responsibilities vary depending on the requirement: initially the business analyst interacts with the end users/managers, then based on the requirements the software consultants do the development, the testers do the testing, and finally go-live happens.

What are the activities that we perform in a production support project?

In production support, most projects mainly work on monitoring the loads (from R/3 or non-SAP sources to the data targets in BW). The details vary from project to project: some use process chains and some use event chains.

What are the different transactions that we use frequently in a production support project? Please explain them in detail.

Generally, in a production support project, we check the loads using RSMO to monitor them, and we rectify the errors there using the step-by-step analysis.

The consultant is required to have access to the following transactions in R/3:

1. ST22 – ABAP dump analysis
2. SM37 – job overview/execution
3. SM58 – execute LUWs, tRFC errors
4. SM51 – checks whether servers are logged on to the message server
5. RSA7 – delta queue
6. SM13 – update tables
7. RSRV – analysis and repair of BW objects

Authorizations for the following transactions are required in BW

1. RSA1 – Data Warehousing Workbench
2. ST22 – ABAP dump analysis: lists the ABAP runtime errors that have occurred in the ABAP system
3. SE38 – display and write programs
4. SE37 – function modules
5. SM12 – lock entries
6. RSKC – maintenance of additional permitted characters
7. RSRV – analysis and repair of BW objects


The Process Chain Maintenance (transaction RSPC) is used to define, change and view process chains.

The Upload Monitor (transaction RSMO, or RSRQ if the request is known) monitors data loads.

The Workload Monitor (transaction ST03) shows important overall key performance indicators (KPIs) for system performance.

The database monitor (transaction ST04) checks important performance indicators in the database, such as database size, database buffer quality and database indices.

The SQL trace (transaction ST05) records all activities on the database and enables you to check long runtimes on a DB table or several similar accesses to the same data.

The OS Monitor (transaction ST06) gives you an overview of the current CPU, memory, I/O, and network load on an application server instance.

The ABAP runtime analysis (transaction SE30) measures the runtime of ABAP programs, transactions, and function modules.

The Cache Monitor (accessible with transaction RSRCACHE or from RSRT) shows among other things the cache size and the currently cached queries. The Export/Import Shared buffer determines the cache size; it should be at least 40MB.

3. What is the maximum no. of key figures?

233

4. What is the maximum no. of characteristics?

248

5. I have 0MATERIAL as a characteristic, with the material group created as a display attribute. Afterwards I changed it to a navigational attribute. What is the impact on the star schema?

Ans: An additional X table will be created for the material group. The X table is the time-independent SID table of the navigational attributes.


6. How would you optimize the dimensions?

Ans: Define as many dimensions as possible, taking care that no single dimension exceeds 20% of the fact table size.
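As a made-up illustration of that 20% rule: if the fact table holds 10,000,000 rows, a dimension table with 2,500,000 entries is at 25% and should be split up or turned into a line item dimension, while one with 50,000 entries (0.5%) is fine.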

1. What are the DSO and DTP types in BI?

Standard DSO:

Data population is done via a standard DTP. SIDs are generated for a standard DSO. Data records with the same key are aggregated during the activation process. Requests need activation.

It consists of three tables: the activation queue (new data), the active data table, and the change log table. It is completely integrated in the staging process. Using a change log means that all changes are also written and are available as delta uploads for connected data targets.

Write-optimized DSO: Data population is done via a standard DTP. No SIDs are generated. There is no need to activate requests.

Direct update DSO: Data population is done via APIs. No SIDs are generated. There is no need to activate requests.

It consists of the active data table only, which means it is not easily integrated in the staging process. This DSO type is filled using APIs and can be read via a BAPI. It is generally used for external applications and the APD.

Different types of DTPs: standard DTP, DTP for direct access, error DTP, and DTP for real-time data acquisition.

2. What are DSO changes between 3.5 and 7.0?

Ans: In BI 7.0 the ODS was renamed the DSO (DataStore object). With this new object, three types are available: "standard", which is similar to the ODS and has three tables (the activation queue, the active data table, and the change log table); "direct update", which consists of the active data table only; and "write-optimized", which also has only an active data table. SIDs do not get generated for the second and third types of DSO.

The standard DSO provides delta images and has overwrite functionality as well. Write-optimized DSOs are efficient and targeted at the warehouse layer of the architecture. The direct update DSO is an old concept (3.x) and can be loaded only with APIs.

3. When do we use a write-optimized DSO and when do we use a direct update DSO?

A write-optimized DSO is used to pull large volumes of data; a direct update DSO is used for APD purposes.

a. Used where fast loads are essential, for example multiple loads per day or short source system access times (worldwide system landscapes).

If the DataSource is not delta-enabled: in this case you would want a write-optimized DataStore as the first stage in BI and then pull the delta requests into a cube.


Write-optimized DataStore object is used as a temporary storage area for large sets of data when executing complex transformations for this data before it is written to the DataStore object. Subsequently, the data can be updated to further InfoProviders. You only have to create the complex transformations once for all incoming data.

b. Write-optimized DataStore objects can be the staging layer for saving data. Business rules are only applied when the data is updated to additional InfoProviders.

c. If you want to retain history at request level: in this case you may not need the PSA archive; instead you can use a write-optimized DataStore.

d. If multidimensional analysis is not required and you want operational reports, you might use a write-optimized DataStore first and then feed the data into a standard DataStore.

e. You can use it as a preliminary landing area for incoming data from different sources.

f. If you want to report on daily refreshed data without activation: in this case it can be used in the reporting layer with an InfoSet or MultiProvider.

Functionality of Write-Optimized DataStore

Only active data table (DSO key: request ID, Packet No, and Record No):

        o   No change log table and no activation queue.

        o   Size of the DataStore is maintainable.

        o   Technical key is unique.

        o   Every record has a new technical key, only inserts.

        o   Data is stored at request level like PSA table. 

No SID generation:

        o   Reporting is possible (but you need to make sure performance is optimized).

        o   BEx Reporting is switched off.


        o   Can be included in InfoSet or Multiprovider. 

        o   Performance improvement during data load.

Fully integrated in data flow:

        o   Used as data source and data target

        o   Export into info providers via request delta 

Uniqueness of Data:

        o   Checkbox “Do not check Uniqueness of data”.

        o   If this indicator is set, the active table of the DataStore object could contain several records with the same key. 

Allows parallel load.

Can be included in a process chain without an activation step.

Supports archiving.

10. How do you decide whether a DSO should be included in the standard BI flow?

Ans: It depends on the DataSource's delta mechanism. We check the ROOSOURCE table and the RODELTAM table, and decide based on the delta image.

11. I have a standard DSO as my final data target and want to generate reports on it. To generate reports we check the SID generation option, but if I check this option I lose load performance. How do you model this situation?

Ans: I will uncheck the SID generation option and load. Then, to report on this DSO, I will create an InfoSet using it. That satisfies both requirements.

12. I have non-SAP and SAP source systems. If you are going to install objects from Business Content, what is the prerequisite?

Ans: We need to select the relevant source system; by default all source systems are checked, so we need to uncheck the unnecessary ones.

13. What is the difference between MultiProviders and InfoSets?

Ans:


A MultiProvider can be built on basic InfoCubes, ODSs, InfoObjects, or InfoSets. An InfoSet can contain only ODSs or InfoObjects. MultiProviders use a union operation, while InfoSets use a join. Both objects are logical definitions that do not store any data.

14. What is the difference among start routines, end routines, and expert routines?

Ans:

Start routine: triggers before the transformation rules; generally used for filtering records.

End routine: triggers after the transformation rules; generally used for updating data based on existing data.

Expert routine: triggers without any transformation rules; whenever we write an expert routine, all existing rules are deleted. It is generally used for fully customized rules.

15. What is the impact on existing routines if we create an expert routine?

The existing routines will automatically be deleted or deactivated.

We have the following types of routines in BI 7. Start routine: the start routine is run for each data package at the start of the transformation. It has a table in the format of the source structure as an input and output parameter. It is used to perform preliminary calculations and store these in a global data structure or in a table; this structure or table can be accessed from the other routines. You can modify or delete data in the data package.

Routine for key figures or characteristics: this routine is available as a rule type; you can define the routine as a transformation rule for a key figure or a characteristic. The input and output values depend on the selected field in the transformation rule.

End routine: an end routine is a routine with a table in the target structure format as an input and output parameter. You can use an end routine to post-process data after the transformation on a package-by-package basis; for example, you can delete records that are not to be updated, or perform data checks.

Expert routine: this type of routine is only intended for use in special cases. You can use the expert routine if there are not sufficient functions to perform a transformation. It should be used as an interim solution until the necessary functions are available in the standard routine. A minimal sketch of these routines follows below.


Try these tables: RSAABAP (ABAP code in routines, by code ID), RSAABAPINV (inverse routines), and RSLDPRULESH/RSLDPRULE (coding done in schedulers).
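To make the routine types concrete, here is a minimal sketch of how a start routine and an end routine look in the BI 7.0 transformation editor. The method frames, the SOURCE_PACKAGE/RESULT_PACKAGE parameters, and the tys_* types are generated by the system; the field names (PLANT, AMOUNT, FLAG) are hypothetical and only illustrate typical uses.

  METHOD start_routine.
*$*$ begin of routine - insert your code only below this line *-*
    " Typical use: filter the incoming data package before the rules run
    DELETE source_package WHERE plant = 'TEST'.   " hypothetical field
*$*$ end of routine - insert your code only before this line *-*
  ENDMETHOD.

  METHOD end_routine.
*$*$ begin of routine - insert your code only below this line *-*
    " Typical use: post-process the transformed package
    FIELD-SYMBOLS <result> TYPE tys_tg_1.
    LOOP AT result_package ASSIGNING <result>.
      IF <result>-amount < 0.          " hypothetical fields
        <result>-flag = 'X'.
      ENDIF.
    ENDLOOP.
*$*$ end of routine - insert your code only before this line *-*
  ENDMETHOD.

An expert routine would replace both methods (and all rules) with a single routine that fills RESULT_PACKAGE directly from SOURCE_PACKAGE.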

16. What are start routines, transfer routines, and update routines?

Ans:

Start routines: the start routine is run for each data package after the data has been written to the PSA and before the transfer rules are executed. It allows complex computations for a key figure or a characteristic. It has no return value. Its purpose is to execute preliminary calculations and store them in global data structures; this structure or table can be accessed in the other routines. The entire data package in transfer structure format is used as a parameter for the routine.

Transfer/update routines: they are defined at the InfoObject level, like the start routine, and are independent of the DataSource. We can use them to define global data and global checks.

17. If we write a transfer routine at the characteristic level, how does it behave in transformations?

Ans:-

When you create a transfer routine, it is valid globally for the characteristic and is included in all the transformation rules that contain the InfoObject. However, the transfer routine is only run in a transformation with a DataSource as its source. The transfer routine is used to correct data before it is updated in the characteristic. During data transfer, the logic stored in the individual transformation rule is executed first; then, for each InfoObject that has a transfer routine, the transfer routine is executed on the value of the corresponding field.

In this way, the transfer routine can store InfoObject-dependent coding that only needs to be maintained once, but that is valid for all transformation rules.

18. What are the rule types in transformations?

Ans: Constant, direct assignment, formula, read master data, no transformation, and routine.

19. What error handling mechanisms are available?


The error stack and the error DTP.

In the 7.0 version: 1) DTP: in the DTP you find the Update tab, where the different error handling options are available:

a. No update, no reporting.
b. Update valid records only, reporting not possible (request red).
c. Update valid records only, reporting possible (request green).

In the 3.x version: 2) InfoPackage: on the Update tab, under the data update types for the data targets, you find the error handling pushbutton; clicking it shows all the error-handling options.

20. How do you enable the error stack?

Ans: DTP -> Update tab -> Error handling -> "Valid records update, no reporting (request red)".

21. What is the difference between the DTP and the InfoPackage in BI 7.0?

Ans:

The key difference between 3.x and 7.x is that in 3.x an InfoPackage would load from a single DataSource to multiple data targets (InfoCubes, ODSs, etc.), so if the source was sending a delta, the load had to be done as a single data load to all targets at the same time; you could not load them at different times.

In 7.x, the DTP maintains delta queues from the PSA to the different targets, enabling you to load the delta to each target independently of the others, because each target has its own delta queue. This is a big change in 7.x and helps in load distribution.

22. What is the importance of semantic groups in the DTP?

Ans: Semantic groups specify how you want to build the data packages that are read from the source (DataSource or InfoProvider). To do this, you define key fields: data records that have the same key are combined in a single data package. This setting is only relevant for DataStore objects with data fields that are overwritten. It also defines the key fields for the error stack; by defining the key of the error stack, you ensure that the data can be updated in the target in the correct order once the incorrect data records have been corrected.


A very simple example:

Let's say there are two records in the input stream of data for a DTP:

Product  Material  ...
P1       M1        XYZ ...
P1       M1        PQR ...

If the data gets divided into multiple packets while being processed, the two records might go into separate data packages. If you have defined a semantic group in the DTP with product and material, the system will always put these two records together. This is sometimes required when all such records need to be processed together, for example in a start routine.

23. I installed 0PLANT from Business Content; initially its length was 15 characters, and later I changed it from 15 to 20. Then another requirement calls for installing 0PLANT from Business Content again. If I install it a second time, is there any impact on the existing 0PLANT, and if so, how do we overcome it?

Ans: By selecting the Match (X) / Copy option.

Match (X) means that when you are installing Business Content and that particular object already exists in your system, a tick mark can be seen against it. On installing, the system then takes care that the object is not overwritten, so your enhancements, if any, are not lost.

Example of match

Additional customer-specific attributes have been added to an InfoObject in the A version. In the D version, two additional attributes have been delivered by SAP that do not contain the customer-specific attributes. In order to be able to use the additional attributes, the delivery version has to be installed from Business Content again. At the same time, the customer-specific attributes are to be retained. In this case, you have to set the indicator (X) in the checkbox. After installing the Business Content, the additional attributes are available and the customer-specific enhancements have been retained automatically. However, if you have not checked the match field, the customer-specific enhancements in the A version are lost.

For example, you installed the 0MATERIAL InfoObject and added some Z attributes to it. Later you install the InfoCube 0IC_C03, which contains 0MATERIAL. If you do not select Match (X) / Copy at the time of the InfoCube installation, it will simply overwrite 0MATERIAL and you will lose the Z attributes.

If you select Match (X) / Copy, it will merge the properties and you do not lose the Z attributes.

E.g., the delivered length is 20; 0MATERIAL was installed and its length changed to 40. If you install 0IC_C03 again and do not select Match (X) / Copy, it will overwrite 0MATERIAL and reset the length to 20. If you select Match (X) / Copy, it will merge the properties.

24. I have 10 aggregates on one particular InfoCube. How do you identify which are the most useful?

Ans: By using the valuation (and usage statistics) in aggregate maintenance.

25. What is compounding? Suppose I have 0PLANT in the US market and 0PLANT in the UK market; material comes from two plants and is stored against 0PLANT, which is a compounded characteristic. How do you identify which material belongs to which plant?

Ans: Using the source system ID.

26. How can we use a compounded attribute in reporting?

Ans: -

A compounding attribute lets you derive unique data records in reporting.

Suppose you have cost centers and cost accounts like this, and you want to maintain the proper relation:

Cost centers: 1000, 1001, 1002

Cost accounts: 9001, 9002, 9003

The cost accounts are not unique across cost centers, so the master data would be overwritten and the cost accounts across cost centers could not be differentiated. When you add the cost center as a compounding attribute, a unique record is present. After compounding, the records look unique in reporting, like below:

9001/1000, 9002/1000, 9003/1000, 9001/1001, 9002/1001, 9003/1001, 9001/1002, 9002/1002, 9003/1002. Thus each cost account is differentiated uniquely across cost centers.

27. Compounding objects and their purpose

Ans:- A compound attribute differentiates a characteristic to make the characteristic uniquely identifiable.

In the Compounding tab page, you determine whether you want to compound the characteristic to other InfoObjects. You sometimes need to compound InfoObjects in order to map the data model. Some InfoObjects cannot be defined uniquely without compounding.

For example, if storage location A for plant B is not the same as storage location A for plant C, you can only evaluate the characteristic Storage Location in connection with Plant. In this case, you compound the characteristic Storage Location to Plant so that the characteristic is unique. One particular option with compounding is compounding characteristics to the source system ID; you can do this by setting the "Master data is valid locally for the source system" indicator. You may need to do this if there are identical characteristic values for the same characteristic in different source systems, but these values indicate different objects.

Recommendation: Using compounded InfoObjects extensively, particularly if you include a lot of InfoObjects in compounding, can affect performance. Do not try to display hierarchical links through compounding; use hierarchies instead.

Note: A maximum of 13 characteristics can be compounded for an InfoObject. Note that characteristic values can have a maximum of 60 characters; this includes the concatenated value, meaning the total length of the characteristics in compounding plus the length of the characteristic itself.

Reference InfoObjects: If an InfoObject has a reference InfoObject, it has its technical properties. For characteristics these are the data type and length as well as the master data (attributes, texts, and hierarchies); the characteristic itself also has the operational semantics. For key figures these are the key figure type, the data type, and the definition of the currency and unit of measure; the referencing key figure can have another aggregation. These properties can only be maintained with the reference InfoObject. Several InfoObjects can use the same reference InfoObject; such InfoObjects automatically have the same technical properties and master data. The operational semantics, that is, properties such as description, display, text selection, relevance to authorization, person responsible, constant, and "attribute exclusively", are also maintained with characteristics that are based on one reference characteristic.

Example : The characteristic Sold-to Party is based on the reference characteristic Customer and, therefore, has the same values, attributes, and texts. More than one characteristic can have the same reference characteristic:


The characteristics Sending Cost Center and Receiving Cost Center both have the reference characteristic Cost Center.

Example: Typically, in an organization, employee IDs are allocated serially, say 101, 102, and so on. Suppose your organization comes out with a new employee ID scheme where the employee ID for each location starts with 101. The employee ID for India would be India/101 and for the UK would be UK/101. Note that the employees India/101 and UK/101 are different. Now if someone has to contact employee 101, he needs to know the location, without which he cannot uniquely identify the employee. Hence, in this case, location is the compounding attribute.

28. What is a compounding attribute? In which scenario can we use it?

Ans: This is like a composite key in a table, i.e., only with the combination of certain keys can you identify a unique record.

E.g., only if I combine material and batch do I get a unique record in the material master; this is the simplest example, and here I use batch as the compounding object. Or, technically, a compounding key is like a super key: normally there is only one primary key for any table, and if you want two or more fields to act as the primary key, the compounding key helps you.

Scenario: we have materials coming from different plants; if you want to analyze which material comes from which plant, you go for a compounding key. Here plant is the super key for material.

29. I have 0MATERIAL with 10 attributes, 0CUSTOMER with 15 attributes, and 0DOC_NUMBER. How do you design the dimension tables, and on what basis do you create a line item dimension?

Ans: By analyzing the 1:M and M:N relations, and then checking with the SAP_INFOCUBE_DESIGNS program. If a dimension table is more than 20% of the fact table size, I go for a line item dimension. If I create a line item dimension, no dimension table is built; the SID table is connected directly to the fact table.
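As a made-up illustration of that check: SAP_INFOCUBE_DESIGNS lists each dimension table with its row count and its size relative to the fact table. If the fact table has 9,000,000 rows and a dimension table has 1,800,000 rows, the ratio is 20% and that dimension is a candidate for a line item dimension.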

30. Data loading with 1:N & M:N relationship  

Ans:-

In the case of master data, the characteristic InfoObjects with master data attributes take care of the 1:N relation. For example, for material master data, the material number is the key based on which the data is loaded for each new material. In the case of transaction data with a 1:N relation, you may take a DSO in which the primary keys are made the key fields of the DSO, based on which repeating combinations are overwritten. You may use an InfoSet when there is a requirement for an inner join (intersection) or a left outer join between master data/DSO objects. The InfoCube can be used for M:N relations. For a union operation between two InfoProviders, a MultiProvider may be used.

31. How will you know that a dimension can be a line item dimension before loading into the cube?

Ans:-

If you display your cube and click on the Dimensions tab, you can see all the dimensions. Beside the dimension table name there is a checkbox for Line Item Dimension (3.x).

For 7.0, you need to look at the properties of the dimensions using the context menu.

32. What is the mirror field on the R/3 side for 0RECORDMODE?

Ans: ROCANCEL.

33. What is the difference between master data activation and the attribute change run?

Ans:

Both the attribute change run and master data activation are used for activating master data.

On the attribute change run screen, you can activate the master data for multiple objects at a time, and it schedules a background job.

With the master data activation option (by right-clicking on the object), you can activate the master data for a single object only.

We can also use the attribute change run in process chains; there is a process type for it. Generally, when you load a master data object in a process chain, the system proposes the attribute change run as a dependent process after the InfoPackage.

34. How can we do partitioning?

Ans:

There are two types of partitioning: physical partitioning, which is done at the database level, and logical partitioning, which is done at the data target level.


InfoCube partitioning on the time characteristics 0CALMONTH or 0FISCPER is physical partitioning.

Logical partitioning is the partitioning of an InfoCube, i.e., dividing the cube into different cubes and creating a MultiProvider on top of them.

In the InfoCube maintenance choose Extras -> DB Performance -> Partitioning, and specify the value range. Where necessary, limit the maximum number of partitions. Note: you can only change the value range when the InfoCube does not contain any data.

Check the F table and E table.

In 3.5, you can drop empty partitions using the program SAP_DROP_EMPTY_FPARTITIONS.

35. How does archiving of data work in SAP BI?

Archiving is used to store your data at a remote location to improve performance in BI. Or:

Archiving is the process of moving data that is not required online from the SAP database to a storage system; archived data can be read offline whenever the user requires it.

Archiving frees up database space, improves system performance to a great extent, and is cost-effective for the client with respect to hardware.

We use the archiving process in various SAP application areas. We can archive master data and transactional data.

Master data such as Customer master data, Vendor master data, Material master data, Batch master data and so on...

Transaction data such as sales orders, delivery documents, shipment documents, billing documents, purchase requisitions, purchase orders, production orders, transfer orders, accounts receivable, accounts payable, and so on.

The following steps should be followed for archiving in 7.0:

1. Go to transaction RSDAP.
2. Give the InfoProvider name and type, and choose Create.
3. On the General Settings tab, give the archiving object name.
4. On the Selection Profile tab, schedule the time.
5. On the ADK tab, specify the logical file name.
6. Activate, and make sure you note your archiving object name.
7. Go to transaction SARA, give your object name, and click Write.
8. Create a variant, click on the variant, and click Maintain.
9. Select a field, then Continue.
10. On the Further Restrictions tab, give a value.
11. Under the processing options, click Production Mode.
12. Save.
13. On the Attributes tab, give a name and save.
14. Give the start date and print parameters.
15. Execute.

36. Can you explain a project life cycle in brief?

Ans:

The full life cycle refers to the ASAP methodology, which SAP recommends for all its projects:

1. Project Preparation. 2. Business Blueprint. 3. Realization. 4. Fit-Gap Analysis. 5. Go-Live Phase.

Normally, in the first phase all the management people sit together in a discussion. In the second phase you get a functional specification, and based on that a technical specification. In the third phase you actually implement the project, and finally, after testing, you deploy it to production, i.e., go-live.

You will typically get involved in the realization phase. If it is a support project, you come into the picture only after successful deployment.


37. What are the five ASAP Methodologies?

Ans: Project Preparation, Business Blueprint, Realization, Final Preparation, and Go-Live &amp; Support.

1. Project Preparation: In this phase, decision makers define clear project objectives and an efficient decision-making process (i.e., discussions with the client about his needs, requirements, etc.). Project managers are involved in this phase.

A Project Charter is issued and an implementation strategy is outlined in this phase.

2. Business Blueprint: A detailed documentation of the company's requirements (i.e., the objects we need to develop or modify depending on the client's requirements).

3. Realization: This is where the implementation of the project takes place (development of objects, etc.); we are involved in the project from here on.

4. Final Preparation: Final preparation before going live, i.e., testing, conducting the pre-go-live activities, end user training, etc.

End user training is given at the client site: you train the users in how to work with the new environment, as they are new to the technology.

5. Go-Live & support: The project has gone live and it is into production. The Project team will be supporting the end users.

38. What are indexes, and how do you increase performance using them?

Indexes are used to improve the performance of data retrieval when executing queries or workbooks. When we execute a query and place values in the selection criteria, the indexes act as a retrieval point for the data and fetch it faster.

A common example: a book's index gives the exact location of each topic, so you can go straight to that page. Indexes act in the same manner at the data level on the BW side.

 

39. Give examples of errors while loading data, and how do you resolve them?

1. Timestamp error. Solution: activate the DataSource, replicate it, and load again.

2. Data error in the PSA. Solution: correct the erroneous data in the PSA and load again.

3. RFC connection failed. Solution: raise a ticket to the Basis team to restore the connection.

4. Short dump error. Solution: delete the request and load once again.

a) Loads can fail due to invalid characters.
b) Because of a deadlock in the system.
c) Because of a previous load failure, if the load depends on other loads.
d) Because of erroneous records.
e) Because of RFC connections (solution: raise a ticket to the Basis team to restore the connection).
f) Because of missing master data.
g) Because no data was found in the source system.
h) Invalid characters while loading: when you load data you may get special characters like @#$% etc., and BW will throw an "invalid characters" error. Go to transaction RSKC, enter all the invalid characters, and execute; this stores them in the RSALLOWEDCHAR table. Then reload the data. You will not get the error any more because these are now eligible characters (see the sketch after this list).

i) The ALEREMOTE user is locked. Normally, ALEREMOTE gets locked because an SM59 RFC destination entry has an incorrect password. You should be able to get a list of all SM59 RFC destinations using ALEREMOTE by using transaction SE16 to search the field RFCOPTIONS for the value "*U=ALEREMOTE". You will also need to look for this information in any external R/3 instances that call the instance in which ALEREMOTE is getting locked.

j) Lowercase letters not allowed: check the InfoObject definition for the "Lowercase letters allowed" setting.

k) Extraction job aborted in R/3: it might have been cancelled for running longer than expected, or cancelled by R/3 users if it was hampering performance.

l) DataSource not replicated: if a new DataSource is created in the source system, you should replicate it in dialog mode. During the replication, you can decide whether the DataSource is replicated as a 3.x DataSource or as the new DataSource. If you do not run the replication in dialog mode, the DataSource is not replicated.

m) ODS activation error: ODS activation errors can occur mainly for the following reasons: 1. invalid characters ('#'-like characters); 2. invalid data values for units/currencies, etc.; 3. invalid values for the data types of characteristics and key figures; 4. errors in generating SID values for some data.
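As an illustration of the invalid-character fix in point h, here is a hedged sketch of a cleansing step inside a BI 7.0 start routine. The field name DESCR is a hypothetical example, and maintaining the permitted characters in RSKC (stored in table RSALLOWEDCHAR) remains the standard fix.

  FIELD-SYMBOLS <source> TYPE tys_sc_1.
  LOOP AT source_package ASSIGNING <source>.
    " Replace '#' and any non-printable character with a blank so the
    " record passes the BW character check (field name is hypothetical)
    TRANSLATE <source>-descr USING '# '.
    REPLACE ALL OCCURRENCES OF REGEX '[^[:print:]]'
            IN <source>-descr WITH ` `.
  ENDLOOP.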


40. What is reconciliation?

Reconciliation is the comparison of values between the BW target data and the source system data in R/3, JD Edwards, Oracle, ECC, SCM, or SRM.

In general this is done at three places: comparing the InfoProvider data with the R/3 data; comparing the query output with the R/3 or ODS data; and checking the key figure values available in the InfoProvider against the PSA key figure values.

41. How do you find out which users are executing a particular report?

Ans:

Use the tables RSZCOMPDIR or V_CMP_JOIN as input: build a generic DataSource on these tables, load it into a DSO, and then build a query on it. In these tables you can find details such as the query name and the user who ran it.

Or use the BW Statistics cubes: see the DataSources in RSA5 and install them, and also see the cubes starting with 0TCT* for those DataSources: 0BWTC_C02, 0BWTC_C03, 0BWTC_C04, 0BWTC_C05, 0BWTC_C09, 0BWTC_C11.

42. How can we perform selective deletion in data targets?

Ans: -

Using the transaction code DELETE_FACTS.

Enter DELETE_FACTS in the command field, press Enter, give the InfoCube name, select the "Generate selection program" option, and execute.

43. Please tell me the disadvantages of the following, and how to rectify them:

(1) Aggregates
(2) Compression
(3) InfoCube partitioning
(4) Indexes
(5) Line item dimensions

Ans:

(1) Aggregates: Although aggregates are built for performance, creating too many of them degrades it. Until roll-up takes place, queries will not hit the aggregate. Their main disadvantage is that they store data physically and redundantly; more aggregates waste storage.

(2) Compression: Once the cube is compressed, the request IDs are removed, so deletion by request ID is no longer possible. A compressed request cannot be brought back to normal, and deletion becomes difficult.

(3) Partitioning: Handling several thousand partitions usually impacts DB performance. InfoCube partitioning cannot be done after data is loaded (3.x), although repartitioning is possible in BI 7.0.

(4) Indexes: If you do not drop the indexes before loading, the data load will be slow; if you do not create the indexes before reporting, reporting will be slow. For large data volumes, creating and deleting indexes consumes a lot of time.

(5) Line item dimensions: A line item dimension can be set only when the dimension contains a single characteristic. Many line item dimensions cannot be used, since this reduces the number of characteristics that can be grouped into the remaining dimensions.

44. What are the views in a process chain?

Ans: There are three views in a process chain: the checking view, the log view, and the job overview. Earlier, in 3.0, there was a planning view instead of the checking view; now we have these three views only.


45. In process chains, what is the difference between Repair and Repeat?

Ans:

Suppose a DTP is triggered in a process chain and, after some of the records are processed, it ends in error. In the process chain you will then have the option to repair (not repeat).

Check the table RSPROCESSTYPES; the repair/repeat option for each process type is specified there.

Repair continues with the same instance; Repeat creates a new instance.

46. What is the purpose of "no marker update"?

Ans:

The marker update is like a checkpoint: it gives a snapshot of the stock on the particular date when it was updated. As we are using a non-cumulative key figure, it would take a lot of time to calculate the current stock at report time, for example. To overcome this we use the marker update.

47. What is the snapshot scenario?

Ans:

1) To explain in simple terms, a "snapshot" scenario is the data at a given point in time. For example, let's say there is an employee snapshot cube. If you load data for the month of May, it is the "May snapshot", and if you load again in June, that data is as of June. Both requests are maintained. These cubes are used mainly for comparing the figures between two time periods.

2) A non-cumulative key figure contains two cumulative key figures to store the inflow and outflow of stock movements. If you include a non-cumulative key figure in a cube, its value is not stored in the fact table. Instead, all incoming stock is updated in the inflow key figure, and stock moving out is updated in the outflow key figure. In the update rules, too, you will see these two key figures and not the non-cumulative one.

Now, there is the concept of the marker in a non-cumulative cube, which is updated during compression (you can find more on this in the SAP help). It acts as a reference point during query execution: the value of the non-cumulative key figure is calculated by taking the marker as a reference, adding the inflow, and subtracting the outflow. Hence, if a lot of stock movement has happened, query performance suffers because a lot of calculation is involved.
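A made-up illustration of the marker logic: if the marker set at the last compression holds a stock of 100 units, and the requests loaded since then contain an inflow of 30 and an outflow of 20, the query reports 100 + 30 - 20 = 110 units. The more uncompressed movements there are, the more the OLAP processor has to add and subtract at runtime.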

In the snapshot method, on the other hand, stock movements are not loaded; instead, the status at a particular time is taken. Suppose you want to see the monthly stock level: you upload data every month, and the status of the stock at that particular time is loaded. As this value is stored directly in the cube, no calculation is required at query execution time.

Check this link for more info on NC KF


http://help.sap.com/saphelp_nw04/helpdata/en/80/1a62dee07211d2acb80000e829fbfe/frameset.htm

48. How and where can I control whether a repeat delta is requested?

Ans: Via the status of the last delta in the BW Request Monitor. If the request is red, the next load will be of type 'repeat'. If you need to repeat the last load for certain reasons, set the request in the monitor to red manually. For the contents of the repeat, see question 14. Delta requests set to red despite the data already being updated lead to duplicate records in a subsequent repeat if they have not been deleted from the data targets concerned beforehand.

49. If the delta load fails, how do I do a delta repetition?

Ans:

A repeat delta is used when the previous delta load has failed. The repeat delta picks up the previous delta as well as the current delta.

Steps:

1. Set the technical status of the request to red (if it is not already red).
2. Delete the red request from the data target.
3. Trigger/schedule the InfoPackage.
4. When the InfoPackage is triggered, a popup asks whether you want to repeat the delta; read the popup message and click "Request Again".

Data packet size settings:

This can be done in two ways: globally (for all loads), or specifically for one load.

Global:

Transaction RSCUSTV6. Packet size: this option refers to the number of data records that are delivered with every upload from a flat file within a packet. The basic setting should be between 5,000 and 20,000, depending on how many data records you want to load.

Specific to one load:

Go to the InfoPackage, click on Scheduler in the menu at the top, select "DataS. Default Data Transfer", give the maximum size of a data packet and the number of data packets, and click Save.


50. If the data comes from different DataSources, how are you going to identify which DataSource the data came from?

Ans:

If data comes from two DataSources, just check the InfoPackage for that request; there you can find the DataSource name.

51. If the data comes from two DataSources, how are you going to separate them in a report?

Ans:

If we take data from two DataSources and want to separate them at the report level, we define an indicator during the data load that populates a constant value (D1 or D2) in the data target. In the report we can then put a selection on this indicator.

52. What are reference objects, templates, and compound attributes?

Ans:

Reference object: you create a new InfoObject, but its master data tables are the same as those of the referenced InfoObject. Example: I have the InfoObject 0COUNTRY and need to create ZCOUNTRY. I create ZCOUNTRY with reference to 0COUNTRY; then it is not necessary to maintain the attributes of ZCOUNTRY separately, since they are the same as those of 0COUNTRY, which makes administering the country attributes easy.

Template: creates a copy; a new InfoObject is created as a copy of another InfoObject.

Compound attribute: a method to expand the key of an InfoObject. Example: 0COSTCENTER (cost center). We can have the same cost center in different controlling (CO) areas. To represent this in BI, we add the CO area as a compounding characteristic of the cost center, which means the cost center has a dependency on the CO area.

53. Aggregates -- SAP BW Query Performance

Ans:-

Aspects of SAP BW Query Performance

Sound data model, dimensional modelling, logical partitioning, physical partitioning, BW reporting performance, aggregates, pre-calculated web templates, and the OLAP cache.

Definition: Aggregates are materialized subsets of InfoCube data, where the data is pre-aggregated and stored in an InfoCube structure. Purpose: to accelerate the response time of queries by reducing the amount of data that must be read in the database for a navigation step.

We decide to build aggregates mainly when the database time is more than 30% of the query runtime; you can get this time from ST03.

Aggregates can be created for basic InfoCubes: on dimension characteristics, on navigational attributes, on hierarchy levels, using time-dependent navigational attributes (as of BW 3.x), and using hierarchy levels where the structure is time-dependent (as of BW 3.x).

Defining aggregates: '*' groups according to a characteristic or attribute value, 'H' groups according to the nodes of a hierarchy level, and 'F' filters according to a fixed value.

Aggregation using hierarchies: time-independent hierarchies are stored outside the dimension, in this example in a table called /BI0/ICOUNTRY.

OLAP processor, query splitter: the split of a query is rule-based. Parts of the query on different aggregation levels are split; parts with different selections on a characteristic are combined; parts on different hierarchy levels, or parts using different hierarchies, are split.

After the split, the OLAP processor searches for an optimal aggregate for each part. Parts that use the same aggregate are combined again (in some cases it is not possible to combine them).

Aggregate suggestions can be based on: the last entry of the database tables RSDDSTAT/RSDDSTATAGGRDEF for the current user (runtime suggestion); the database tables RSDDSTAT/RSDDSTATAGGRDEF; or the BW Statistics InfoCubes. Proposals can be restricted to queries with a minimum runtime, and you can choose the period to be used for the proposals.

Aggregates suggested from BW statistics get names such as STAT <N>, MIN <N>, and MAX <N>.

Building good aggregates. Tips for building (and maintaining) good aggregates: keep them relatively small compared to the parent InfoCube; try for summarization ratios of 10 or higher; find good subsets of data (frequently accessed); build on some hierarchy levels, not all; make them not too specific and not too general, so they serve many different query navigations; consider "component" aggregates; and they should be frequently used and used recently (except basis aggregates).

Building bad aggregates. Characteristics of bad aggregates: too many very similar aggregates; aggregates not small enough (compared to the parent cube); too many "for a specific query" aggregates and not enough general ones; old aggregates, not used recently; infrequently used or unused aggregates. Exceptions: a large aggregate containing navigational attributes may benefit performance despite its size (but remember the tradeoff), and a basis aggregate may be large and not used for reporting but still be useful for maintenance.

Analysis tools: Workload Monitor (ST03). Scenario: find the queries with the worst performance and try to optimize them. Useful features: expert mode; BW system load (analysis of table RSDDSTAT). Check the queries with the highest runtimes and where most of the time was consumed (OLAP init, DB, OLAP, frontend), and check the ratio of selected to transferred records.

How to tell if an aggregate will help

Do the following steps: call the query or InfoCube overview in the technical content or ST03; sort by mean overall time to find the queries/InfoCubes with the highest runtimes; calculate the KPI 'aggregation ratio' = number of records read from the DB / number of records transferred; and check the share of database time in the total runtime. As a rule of thumb, an aggregate will be helpful if the query statistics show a summarization ratio greater than 10 (i.e., ten times more records are read than are displayed) AND a percentage of DB time greater than 30% (i.e., the time spent on the database is a substantial part of the whole query runtime).
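A made-up worked example of that rule of thumb: if a query reads 1,200,000 records from the database and transfers 8,000 to the frontend, the aggregation ratio is 1,200,000 / 8,000 = 150; if the DB time is, say, 45% of the total runtime, both thresholds are exceeded and an aggregate is likely to help.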

Overview of reporting performance analysis tools: table RSDDSTAT; the BW queries of BW Statistics (using table RSDDSTAT as an InfoSource); ST03 (collecting information from table RSDDSTAT via the function module RSDDCVER_RFC_BW_STATISTICS); and the BW workload analysis in ST03.

Creation of aggregates: right-click the cube and choose Maintain Aggregates; in the popup select "Create by yourself", then Next; on the left-hand side you will find the InfoObjects; drag and drop them to the right-hand side and click Activate.

In a process chain, aggregates can be added under data target administration, where you will find "Roll up of filled aggregates". To improve query performance, always remember:

1. Your line item dimensions should be about 15% of your fact table.
2. The cube should be partitioned.
3. Aggregates should be built.
4. Compress your cube promptly.

54. What is the difference between migration and upgradation projects?

Ans:

Upgradation generally refers to upgrading from an old version to a higher version of BW/BI itself, e.g., from BW 3.5 to BI 7.0.

Migration refers to moving from a different technology to BW, e.g., from an Oracle EDW to BW.


55. What is referential integrity?

Ans:

A feature provided by relational database management systems (RDBMS's) that prevents users or applications from entering inconsistent data.

Most RDBMS's have various referential integrity rules that you can apply when you create a relationship between two tables.

For example, suppose Table B has a foreign key that points to a field in Table A. Referential integrity would prevent you from adding a record to Table B that cannot be linked to Table A. In addition, the referential integrity rules might also specify that whenever you delete a record from Table A, any records in Table B that are linked to the deleted record will also be deleted. This is called cascading delete. Finally, the referential integrity rules could specify that whenever you modify the value of a linked field in Table A, all records in Table B that are linked to it will also be modified accordingly. This is called cascading update.

In SAP we use it during flexible update to check the data records of transaction data against master data.

In other words, it checks before the data is loaded whether the load will post properly or not.

We tick the option in the maintenance of the InfoSource's communication structure.

=============================================

When you load data through a flexible InfoSource into a target, it is possible to enable referential integrity.

In other words, without going into the RDBMS considerations (which are absolutely valid), when you load a record like

0PLANT = XYZ, 0MATERIAL = 4711:

With referential integrity enabled, after each transfer rule and just before passing the data to the communication structure, the system checks whether 0PLANT XYZ and 0MATERIAL 4711 exist as master data. If not, the load fails with a referential integrity error.

With no ref integrity enabled:

The data will pass through the communication structure. Just before posting the data to the target, the system checks again whether the corresponding master data IDs exist for these InfoObjects.

This check is absolutely normal, since BW NEEDS a SID (surrogate ID) for each of the master data IDs prior to posting to a data target. In cube dimensions the SIDs of the master data are posted, as well as in the attributes of an InfoObject (the /BI*/X* table).

If your InfoPackage is set with "allow master data creation while loading", it will create the master data IDs with their corresponding SIDs.

If not, it will fail again.

Therefore there are two spots for referential integrity checking:

1. In the communication structure of a flexible InfoSource (object by object); this is the best option. Note that this also allows you to prevent "garbage" from being posted: if a value is not expected (even from R/2), BW will complain.

2. Globally at the InfoPackage level.

All this makes sense: 1. if you do not have all attributes and texts for 0MATERIAL 4711 loaded beforehand, what is the point of posting it to a target and reporting on it? 2. When the system needs to create master data IDs and SIDs, it impacts the loading time.

In summary, master data should always be loaded before transaction data, and in the right order: if 0MATL_GROUP is an attribute of 0MATERIAL, it should be loaded prior to 0MATERIAL.

Of course, this is the whole challenge of data warehousing, and doing this right will be a key success factor of your project.
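A hedged sketch of the kind of existence check described above (not the actual system code): BW needs a SID for each characteristic value before posting, and a manual equivalent for 0MATERIAL is a lookup on its SID table /BI0/SMATERIAL, using the value 4711 from the example.

  DATA lv_sid TYPE rssid.
  SELECT SINGLE sid FROM /bi0/smaterial INTO lv_sid
    WHERE material = '4711'.
  IF sy-subrc <> 0.
    " No SID exists: with referential integrity switched on, the load
    " fails here; without it, BW can create the SID during loading if
    " the InfoPackage allows master data creation.
  ENDIF.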

56. What is the importance of the table ROIDOCPRMS? Ans:- It is the IDoc parameter table in the source system. This table contains the details of the data transfer, like the source system of the data, the data packet size, the maximum number of lines in a data packet, etc. The data packet size can be changed through the control parameters option in SBIW, i.e. the contents of this table can be changed.
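A minimal sketch of reading these control parameters in the source system ('DEVCLNT100' is a hypothetical logical system name):

DATA ls_prms TYPE roidocprms.

* Data transfer parameters maintained via SBIW for one logical system
SELECT SINGLE * FROM roidocprms INTO ls_prms
  WHERE slogsys = 'DEVCLNT100'.
IF sy-subrc = 0.
  WRITE: / 'Packet size (kB):', ls_prms-maxsize,
         / 'Max lines per packet:', ls_prms-maxlines,
         / 'Max parallel processes:', ls_prms-maxprocs.
ENDIF.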

57. When is IDoc data transfer used? Ans:- IDocs are used for communication between logical systems like SAP R/3, R/2 and non-SAP systems using ALE, and for communication between an SAP R/3 system and a non-SAP system. In BW, an IDoc is a data container for data exchange between SAP systems, or between SAP systems and external systems based on an EDI interface. IDocs support a limited record size of 1000 bytes, so IDocs are not used when loading data into the PSA, since the data there is more detailed; the IDoc transfer method is used only when records are smaller than 1000 bytes.

58. What is the function of 'selective deletion' tab in the manage->contents of an infocube?

Ans: - It allows us to select a particular value of a particular field and delete its contents.

Page 31: BI Interview Questions Ver1.0

59. What is open hub service?

Ans: - The open hub service enables you to distribute data from an SAP BW system into external data marts, analytical applications, and other applications. With this, you can ensure controlled distribution across several systems. The central object for the export of data is the InfoSpoke. Using it, you define the object from which the data comes and the target into which it is transferred. Through the open hub service, SAP BW becomes a hub of an enterprise data warehouse. The distribution of data becomes transparent through central monitoring of the distribution status in the BW system.

60. SOME DATA IS UPLOADED TWICE INTO AN INFOCUBE. HOW TO CORRECT IT? Ans: - But how is that possible? If you loaded it manually twice, then you can delete one load by its request ID.

61. How to know in which table (SAP BW) contains Technical Name / Description and creation data of a particular Reports. Reports that are created using BEx Analyzer.

Ans:- There is no single such table in BW; while opening a particular query, press the Properties button and you will see all the details you want.

You will find the information about technical names and descriptions of queries in the following tables:
RSRREPDIR - Directory of all reports
RSZELTDIR - Directory of the reporting component elements
RSRWORKBOOK - Where-used list for reports in workbooks (links workbooks to queries)
RSRWBINDEXT - Titles of Excel workbooks in the InfoCatalog

62. Can we create a cube without a time characteristic?

Ans:- Yes, but what is the use? You can't analyze the data in the cube over time.

63. WHAT ARE THE CHARACTERISTICS IN THE DATA PACKET DIMENSION TABLE?

Ans:- For every load the system generates a data packet ID, which is stored in the data packet dimension table. It is system-defined; we can't change it. It contains the following characteristics: Change Run ID (0CHNGID), Record Type (0RECORDTP), Request ID (0REQUID).

64. Can we create a DSO without DATAFIELDS? Ans:- Yes, but it is of no use.

65. Char1 is already an attribute of Char2 and data exists. Now, as requested, Char1 has to be changed from a display attribute to a navigational attribute of Char2. After the change, what else do we have to do or check, given that Char2 is used in many other InfoProviders?

Answer: Mark Char1 as a navigational attribute in Char2 and activate the object; a warning will appear saying Char2 is used in many providers. Ignore it and activate.

Page 32: BI Interview Questions Ver1.0

66. If I need to switch on a navigational attribute in a DSO after checking it on, what else do I have to take care of, given that lots of data is already stored in this DSO?

Answer: Go to the InfoProvider for which you need Char2__Char1 as a navigational attribute; in edit mode of the InfoProvider, open the navigational attributes tree and mark it as a navigational attribute. If it is on a MultiProvider, do the identify/assign step to define from which provider the value will come. Also: if you have a DSO, mark the attribute as navigational in the DSO, and if there is a MultiProvider on top of the DSO, mark it as navigational in the MultiProvider as well (it depends on where you want to use it / where your query is). Hope this helps.

A navigational attribute is never physically stored in the cube / info provider. So essentially you need not reload or change any data. When you change CHAR1 as a nav. attribute in CHAR2, this change is reflected in all infoproviders that use CHAR2. You will see that attribute as CHAR2__CHAR1 in the navigational attributes section of the infoprovider. They will by default not be used. If you want to use them check the check box which lies next to the navigational attribute in the navigation attributes section. This enables the navigational attribute for the cube/ infoprovider.

Since there is no physical change in data, you need not perform any reloads etc.

67. You mentioned InfoProviders which contain such a characteristic. How about master data? I mean, e.g., Char1 is updated from a display attribute to a navigational attribute of Char2. If Char2 already contains data, how will the system maintain the X table of Char2 for Char1, since there was no such field in the X table before the modification?

Ans: - When you change an attribute from display to navigational, you are essentially changing the InfoObject, so the object goes inactive: when Char1 is switched to navigational, Char2 goes inactive. When you reactivate this InfoObject, the tables are recreated according to the new structure, so the X table will now have the Char1 field. Data is also copied from the P table to the X table.

Hope this answers your question.

68. IN CUBE KEY FIGURE PROPERTIES, WHAT IS DISPLAY? Ans:- This defines how you want to display the values in reports, i.e. decimal places and other display properties.

69. HOW MANY FACT TABLES ARE THERE? Ans:- Two: the E fact table and the F fact table. Once you compress the cube, the data moves from the F fact table to the E fact table.

70. HOW DO YOU CREATE A GLOBAL QUERY AND A LOCAL QUERY? You can change the view in the report, i.e. global and local, in BW 3.5.

71. WHAT IS THE CURRENCY TRANSLATION TABLE NAME?

TCURR. Currency translation can be carried out in various places: transfer rules, update rules, transformations, and at report level.
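As an illustration, a minimal ABAP sketch of a currency translation (e.g. inside a routine), using the standard function module CONVERT_TO_LOCAL_CURRENCY, which reads the rates maintained in TCURR; the amounts, currencies and rate type 'M' (standard average rate) here are just example values:

DATA: lv_foreign TYPE p DECIMALS 2 VALUE '100.00',
      lv_local   TYPE p DECIMALS 2.

CALL FUNCTION 'CONVERT_TO_LOCAL_CURRENCY'
  EXPORTING
    date             = sy-datum        " rate valid on this date
    foreign_amount   = lv_foreign
    foreign_currency = 'USD'
    local_currency   = 'EUR'
    type_of_rate     = 'M'
  IMPORTING
    local_amount     = lv_local
  EXCEPTIONS
    no_rate_found    = 1
    OTHERS           = 2.
IF sy-subrc = 0.
  WRITE: / 'Converted amount:', lv_local.
ENDIF.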


72. WHAT IS REALIGNMENT? Once you load master data, you need to run the attribute change run; it adjusts the (aggregate) data as per the changes.

73. WHAT IS THE DIFFERENCE BETWEEN LO & FI DATASOURCES? AND WHAT IS THE DIFFERENCE REGARDING DELTA?

Ans:- LO - you need to fill setup tables for init loads and execute the V3 job for deltas; LO transfers the data from the update queue to the delta queue and then to BW. FI doesn't have this kind of scenario.

74. IN ODS I DELETED ONE REQUEST; CAN I GET IT BACK AGAIN? Ans:- Yes, if you have the complete data in the PSA, you can load it from the PSA to the ODS.

75. HOW MANY LINE ITEM DIMENSIONS CAN WE CREATE?

Up to 13 - any of the freely definable dimensions can be marked as a line item dimension - but having too many line item dimensions leads to performance issues; we used a maximum of 4 in our project.

76. Analysis Process Designer (APD) t-code? Ans: - RSANWB

77. How much customization was done on the InfoCubes you have implemented?

Ans:- In some cases we added navigational attributes (e.g. material type on the inventory cube); in some other cases there was an extractor enhancement to add an additional characteristic, and this additional characteristic was also added to the cube (e.g. sales channel in Sales Overview).

78. Key performance indicator (KPI)Ans:- A performance indicator or key performance indicator (KPI) is a measure of performance.[1] Such measures are commonly used to help an organization define and evaluate how successful it is, typically in terms of making progress towards its long-term organizational goals[2]. KPIs can be specified by answering the question, "What is really important to different stakeholders?". KPIs may be monitored using Business Intelligence techniques to assess the present state of the business and to assist in prescribing a course of action. The act of monitoring KPIs in real-time is known as business activity monitoring (BAM). KPIs are frequently used to "value" difficult to measure activities such as the benefits of leadership development, engagement, service, and satisfaction. KPIs are typically tied to an organization's strategy using concepts or techniques such as the Balanced Scorecard.

The KPIs differ depending on the nature of the organization and the organization's strategy. They help to evaluate the progress of an organization towards its vision and long-term goals, especially toward difficult to quantify knowledge-based goals.

A KPI is a key part of a measurable objective, which is made up of a direction, KPI, benchmark, target, and time frame. For example: "Increase Average Revenue per Customer from £10 to £15 by EOY 2008". In this case, 'Average Revenue Per Customer' is the KPI.

KPIs should not be confused with a Critical Success Factor. For the example above, a critical success factor would be something that needs to be in place to achieve that objective; for example, an attractive new product.

Example :

Performance indicators differ from business drivers & aims (or goals). A school might consider the failure rate of its students as a Key Performance Indicator which might help the school understand its position in the educational community, whereas a business might consider the percentage of income from return customers as a potential KPI.

But it is necessary for an organization to at least identify its KPIs. The key environments for identifying KPIs are:

- Having a pre-defined business process (BP).
- Having requirements for the business processes.
- Having a quantitative/qualitative measurement of the results and comparison with set goals.
- Investigating variances and tweaking processes or resources to achieve short-term goals.

A KPI can follow the SMART criteria. This means the measure has a Specific purpose for the business, it is Measurable to really get a value of the KPI, the defined norms have to be Achievable, the KPI has to be Relevant to measure (and thereby to manage) and it must be Time phased, which means the value or outcomes are shown for a predefined and relevant period.

In most cases, key figures are the KPIs. Otherwise we create them as RKFs or CKFs.

For example: you want to see the performance (sales) of store 9999 for month 001.2010, i.e. how well it is doing. So you create a report which shows the sales of that store for month 001.2010.

This sales figure is nothing but a KPI.

79. Difference between doc date, posting date and invoice date  Ans:- Posting Date: Date which is used when entering the document in Financial Accounting or Controlling. The posting date can differ from both the entry date and the doc date.

Document Date: The document date is the date on which the original document was issued. Ex: Inv date, Bill date etc.,


Invoice Date: Usually the date when goods are shipped; payment dates are set relative to the invoice date. (Or) This is related to sales, where the customer sales document is created in SAP (VA01).

Document date is the date on which the sales document is created. Posting date is the loading date on which the data gets posted to BW (check the schedule line for the material in transaction VA03). Invoice date is the billing date on which the order is billed.

80. Exception Aggregation Ans:- There are two levels of exception aggregation. This is because there are two ways to calculate and display values: one is to calculate and store the values during the load, the other is to calculate them at runtime.

Calculating and storing during the load is not always possible, either due to the conditions on the calculation or due to constantly changing data.

One typical example: consider sales for a customer. If today's sales for the customer are zero, then take the previous day's sales, else take today's sales; and if both are zero, take zero. This is typically very expensive to implement at data load level and unrealistic to consider if your data volumes are high; it should ideally be done at query level.

Another example: calculate average sales for the customer as invoice amount divided by the number of distinct lines sold across invoices. Meaning, if the customer has bought 10 distinct lines across 20 bills, then average sales = invoice amount across all these bills / 10, even though the total number of lines may be 40 with repeats across invoices. This would be an exception aggregation at the key figure level, since it cannot be done in the data load when you are looking at data at a higher level than invoice and line item and for variable time periods.


What happens when such an exception aggregation is executed:

1. The data is fetched from the cube / DSO at the most detailed level, including the characteristic on which the exception aggregation is done. This amounts to huge volumes of data (for very large cubes / DSOs).

2. The calculation is done by the OLAP processor, not while the data is fetched: the data is fetched first and the calculation is then done by the OLAP processor, meaning you will not see any of these formulae in the execution plan / SQL query in RSRT.

Ideally, for very large cubes / DSOs, do not use exception aggregation unless you have no choice; try to use data load calculations if possible. If it cannot be done without exception aggregation, then follow these rules of thumb:

1. Compress the cube regularly.
2. Update statistics regularly, preferably after each data load.
3. Watch statistics very closely for DB time and OLAP time.
4. Observe the query costs in RSRT by going through the execution plan.
5. If possible, cache the query so that data fetch time is reduced.

(OR)

Exception Aggregation is an extremely powerful concept when developing a query in BW. It can help aggregate data to the level that you want irrespective of the level at which the data is stored. For example, you may have an Infoprovider at order item level and you may want to report on the number of items and the number of orders in that InfoProvider. How do you achieve this without creating another Infoprovider at order level? The answer is simple. Follow these steps:

 1. Assuming that the lowest level of detail in this DSO is order item, create a new calculated key figure in query designer.

2. Edit the key figure.

3. Then click on the Aggregation tab and set the exception aggregation to a counter; in the Ref. Characteristic box select the order characteristic. This key figure will then output the number of unique orders, thus allowing you to report on the number of orders from this DSO even though the DSO is at order item level.

4.  Add this key figure to your query and run it!

 You can also use exception aggregation to report on First, Last, Maximum and Minimum values.

81. THE DATASOURCE IS IN PRODUCTION AND DELTAS ARE RUNNING. WHEN YOU ENHANCE THE DATASOURCE WITH A NEW FIELD, HOW DO YOU TRANSPORT THAT DATASOURCE? WHAT ABOUT DATA & LOADS?

Ans:- Nothing will happen to the data; we are transporting only the structure. Historical data is not available for the new field; from the move to production onwards you will get data for that field. If you need historical data for the new field as well, you have to delete all the data and reload it to get historical data for that field.


82. How to find all the Inactive info objects of the BW system and how to activate all of them?

Ans:-

Table RSDIOBJ with OBJVERS = 'M' gives the modified versions; 'D' gives you the delivered objects and 'A' the active objects.

Some objects will exist in both the A and M state; make sure you take only the M versions of those that do not have an A version.

A - Active, M - Modified, D - Delivered.
a) InfoObjects in inactive status appear with greyed-out icons.
b) You can activate all such objects using the program RSDG_IOBJ_ACTIVATE.
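A minimal ABAP sketch of finding those objects programmatically (using table RSDIOBJ as above): select the M versions that have no corresponding A version.

DATA lt_inactive TYPE STANDARD TABLE OF rsdiobj.

* InfoObjects with a modified (M) version but no active (A) version
SELECT * FROM rsdiobj INTO TABLE lt_inactive
  WHERE objvers = 'M'
    AND NOT EXISTS ( SELECT * FROM rsdiobj AS act
                       WHERE act~iobjnm  = rsdiobj~iobjnm
                         AND act~objvers = 'A' ).

* The collected names can then be handed to the activation program
* RSDG_IOBJ_ACTIVATE mentioned above.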

83. Performance tools?

Ans:-

SQL Trace (ST05), RSRV, RSRT, RSRTRACE, ABAP Trace (SE30).

Database:
1. Data Model
2. Query Definition
3. Aggregates
4. OLAP Cache
5. Pre-Calculated Web Templates
6. Compressing
7. Indices
8. DB Statistics
9. DB and Basis (Buffer) Parameters

OLAP:
1. Data Model
2. Query Definition (including OLAP features)
3. Aggregates
4. OLAP Cache
5. Virtual Key Figures / Characteristics
6. Authorizations

Frontend:
1. Network
2. WAN and BEx
3. Client Hardware
4. VBA / Java
5. Documents
6. Formatting
7. ODBO / 3rd-party SQL

85. What is the Rule group?

Ans: - A Rule Group is a group of transformation rules. It contains one transformation rule for each key field of the target. A transformation can contain multiple rule groups.

Rule type Initial can be set only for key fields: the field is not filled; it remains empty. Rule type No Update can be set only for non-key fields: the key figure is not updated in the InfoProvider.

For Example:

1. The source contains three date characteristics: Order date, Delivery date and Invoice date.

The target only contains one general date characteristic. Depending on the key figure, this is filled from the different date characteristics in the source.

Create three rule groups which, depending on the key figure, update the order date, delivery date, or invoice date to the target.

86. Technical name of the PSA in BI 7? Ans:-


Right click on datasource --> Manage. At the top of the window it will show you PSA table name.

Or you can check table RSTSODS: enter the DataSource name suffixed with *, and it will show you the PSA table name.

Or check in t-code RSA1OLD, as in BW 3.5.

Alternatively, select any InfoCube, right-click and choose "Show data flow"; you will get the data flow on the right-hand screen. Select the "Technical Name ON/OFF" switch and "Zoom In" to see the maximum detail; you will get both the technical name and the table name.
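A minimal ABAP sketch of the RSTSODS lookup mentioned above ('ZMY_DS' is a hypothetical DataSource name):

DATA lt_psa TYPE STANDARD TABLE OF rstsods.

* One entry per PSA version; field ODSNAME_TECH holds the technical
* name of the PSA table (/BIC/B0000... style).
SELECT * FROM rstsods INTO TABLE lt_psa
  WHERE odsname LIKE 'ZMY_DS%'.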

87. Cube design Ans:- We should design a cube based on our requirements; it should not be based on the number of InfoObjects. I will just give an overview of the basic rules of an InfoCube; once you know these, you will get a better picture.

1. An InfoCube can contain a maximum of 16 dimensions. Of these 16, 3 dimensions are there by default: time, data packet and unit.

2. So we can create 13 dimensions, and each dimension can contain 248 characteristics. So you can have 13 * 248 characteristics in your cube - that's a lot of InfoObjects.

3. When you put characteristics into an InfoCube, you should club them into a dimension in such a way that they have a parent-child relationship, that is, a 1:N relationship.

4. Most importantly, you should not put characteristics that have an M:N relationship together in one dimension. For example, you should not put customer and material in the same dimension: a customer can buy many materials, and a material can be bought by many customers, so an M:N relationship exists. As a result, the dimension table will grow rapidly, and that will result in bad performance.

The rule of thumb is that the size of a dimension table should not exceed 10 to 15% of the size of the fact table.

In your case, just analyse your InfoObjects and include them in dimensions according to the rules specified above. There is much more to be taken care of when designing an InfoCube; you will find lots of documents on the net. (Or) We can only use 13 of the 16 dimensions of an InfoCube. As for the criteria for placing different InfoObjects into different dimensions: first check the relationships between the InfoObjects. There are mainly three types of relationship:

1. If the relation is 1:N, N:1 or 1:1, then you can insert those InfoObjects into a single dimension.

2. If the relationship is M:N, then you must put those InfoObjects into different dimensions, else the dimension table's size will grow to more than 20% of the fact table.


3. Leave one dimension free for future enhancements.

88. What is the high cardinality flag? When do we have to use it? Ans:- Normally, when a dimension table occupies more than 20% of the fact table's space, that dimension can be declared as a line item dimension.

You can run the SAP_INFOCUBE_DESIGNS program and give the name of the InfoCube; it shows how much space each dimension occupies. Based on that, you can declare a line item dimension, and in the same place you can set the high cardinality flag.
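A minimal sketch of the ratio check this program performs, for a hypothetical cube ZSALES (/BIC/DZSALES1 would be its first customer dimension table, /BIC/FZSALES its F fact table):

DATA: lv_dim  TYPE i,
      lv_fact TYPE i.

SELECT COUNT( * ) FROM /bic/dzsales1 INTO lv_dim.
SELECT COUNT( * ) FROM /bic/fzsales  INTO lv_fact.

* Compare the dimension's row count against the 20% threshold
IF lv_fact > 0 AND lv_dim > lv_fact / 5.
  WRITE: / 'Dimension exceeds 20% of the fact table:',
           'consider line item / high cardinality.'.
ENDIF.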

89. We have 2 types of Indices;

1) Primary index
2) Secondary index

Primary index: automatically system-generated. Secondary index: there are 2 types of secondary index:

a) Bitmap index
b) B-tree index

When the cardinality of a dimension table is less than 20% of the fact table, we go for a bitmap index; when the cardinality of the dimension table is more than 20% of the fact table, we go for a B-tree index.

Am I right? If I am wrong, please correct me. My questions are:

1) How can we find out the total fact table size and dimension table sizes?
2) Where can we calculate the cardinality of a dimension?
3) Where and how can we create bitmap and B-tree indexes?

Please help me. Ans:- To find the fact table vs. dimension table ratio, you can use:
1. ABAP program SAP_INFOCUBE_DESIGNS
2. T-code RSRV
3. T-code LISTSCHEMA - here you will see all the tables involved in an InfoCube. From here, you can manually find the number of records in the fact table (E and F tables) and in the dimension tables associated with it.

Indices are used to locate needed records in a database table quickly. BW uses two types of indices: B-tree indices for regular database tables and bitmap indices for fact tables and aggregate tables.

B-tree indices: used when dealing with huge volumes of data. Line item dimensions use B-tree indices. If you use B-tree indices, there is no need to delete the indices before loading, as a B-tree can handle inserts well.

Bitmap indices: used for fact tables and aggregate tables. Bitmap indices can dramatically improve query performance when table columns contain few distinct values. Except for line item dimensions, dimensions use bitmap indices. Bitmap indices cannot handle inserts, updates and deletions well; because of that, we have to delete the indices before loading and rebuild them after loading.

90. What are the joins available? What is a temporal join? Ans:-

Inner join: a record can only be in the selected result set if there are entries in both joined tables. Example: master data is coming in for 0MATERIAL, and a cube is getting transactional data with material. If you use an inner join, only the common records (an intersection) will come through, i.e. those present both in the 0MATERIAL master data and in the cube.

Left outer join: even if there is no corresponding record in the right table, the record is part of the result set (fields belonging to the right table have initial values). Example: with a left outer join on DSO 1, you will see all the common customer IDs with all fields populated, plus the customer IDs that exist only in DSO 1, with only the fields available in DSO 1 populated.

Temporal join: a join is called temporal if at least one member is time-dependent.

For example, a join contains the following time-dependent InfoObjects (in addition to other objects that are not time-dependent).

InfoObject in the join          Valid from    Valid to
Cost center (0COSTCENTER)       01.01.2001    31.05.2001
Profit center (0PROFIT_CTR)     01.03.2001    31.07.2001

The area where the two time intervals overlap, i.e. the validity period that the InfoObjects have in common, is known as the valid time interval of the temporal join.

Temporal join                   Valid from    Valid to
Valid time interval             01.03.2001    31.05.2001

Self join: the same object is joined with itself.

91. DataSource migration t-code? Ans:- RSDS

92. Where is the 0RECORDMODE InfoObject used? Ans:- It is used in delta management. The ODS uses the 0RECORDMODE InfoObject for delta loads. 0RECORDMODE has values such as X, D and R: X means the row is to be skipped, while D and R stand for deletion and removal/reversal of rows.

93. What steps do we take, after initialization has happened in LO extraction, to pick up the delta?

Ans:- After an init without data transfer, you need to run the V3 job (if data is already in RSA7, then load the delta and after that set up the V3 job). Then create an InfoPackage in BW, select Delta, and run it. The InfoPackage doesn't know about the delta automatically, so you need to select it. You can run it manually; that is not a problem.

98. Why do we delete the setup tables (LBWG) & fill them (OLI*BW)?

Ans:- Initially we don't delete the setup tables, but when we make a change to the extract structure we do. If we are changing the extract structure, that means there are some newly added fields that were not there before. So, to get the required data (and to avoid redundancy), we delete and then refill the setup tables.

This refreshes the statistical data. The extraction setup reads the dataset that you want to process (such as customer orders, with tables like VBAK and VBAP) and fills the relevant communication structure with the data. The data is stored in cluster tables, from where it is read when the initialization is run. It is important that during the initialization phase no one creates or modifies application data, at least until the setup tables are filled.

LO Extraction Steps:
1. Go to transaction LBWE (LO Customizing Cockpit); select Logistics Application -> SD Sales BW -> Extract Structures.
2. Select the desired extract structure and deactivate it first.
3. Give the transport request number and continue.

4. Click on 'Maintenance' to maintain the extract structure; select the fields of your choice and continue. Maintain the DataSource if needed.

5. Activate the extract structure

6. Give the Transport Request number and continue

7. Delete the content of Setup tables (T-Code LBWG)

8. Fill the setup tables: SD Sales Orders - Perform Setup (t-code OLI7BW).

9. Check the data in Setup tables at RSA3

10. Replicate the DataSource

11. Go to transaction LBWE and make sure the update mode for the corresponding DataSource is serialized V3 update

12. Go to the BW system and create an InfoPackage; under the Update tab select the 'Initialize Delta Process' option and schedule the package. Now all the data available in the setup tables is loaded into the data target.

13. Now, for the delta records, go to LBWE in R/3 and change the update mode for the corresponding DataSource to direct/queued delta. By doing this, records bypass SM13 and go directly to RSA7. Go to transaction RSA7; there you can see a green light, and as soon as new records are added you can see them in RSA7.

14. Go to the BW system and create a new InfoPackage for delta loads. Double-click on the new InfoPackage; under the Update tab you can select the Delta Update radio button.

15. Now you can go to your data target and see the delta records.

101. How to link fields from cube to R/3 table/field?

Ans:-

If it is an LO DataSource, you can see it in LBWE: click on Maintain and see the table-field names.

Check the following tables:

DataSource (= OLTP source) tables:
ROOSOURCE - Header table for SAP BW DataSources (SAP source system/BW system)
RODELTAM - BW delta procedure (SAP source system)
RSOLTPSOURCE - Replication table for DataSources in BW

Mapping tables:
RSISOSMAP - Mapping between InfoSources and DataSources (= OLTP sources)
RSOSFIELDMAP - Mapping between DataSource fields and InfoObjects
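A minimal ABAP sketch of such a lookup, run in the source system (2LIS_11_VAITM is just an example DataSource name):

DATA ls_src TYPE roosource.

* Header record of the active version of the DataSource
SELECT SINGLE * FROM roosource INTO ls_src
  WHERE oltpsource = '2LIS_11_VAITM'
    AND objvers    = 'A'.
IF sy-subrc = 0.
  WRITE: / 'Extractor:    ', ls_src-extractor,
         / 'Delta process:', ls_src-delta.
ENDIF.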

96. What are the tables behind RSA7? Ans:- There are three tables:
1. TRFCQOUT (sender, receiver, delta queue name, user)
2. ARFCDATA (the LUWs)
3. ARFCSTAT (the link between the above two tables)
But it's not possible to read them directly; if you want to read them, use function module RSC2_QOUT_READ_DATA.

97. Can we create a hierarchy DataSource using generic extraction?

Yes we can.

Generally, hierarchy sets are created in ECC using t-code GS01. For these, a DataSource is created using t-code BW07.

In BW07, give the set leaf table name as the table, the field name as whatever you defined for the set, and give the hierarchy name for the user-defined DataSource name.

Once you have created the DataSource, replicate it in BW and use it like a normal DataSource. In the data selection of the DataSource's InfoPackage, you have to select your hierarchy from the list.

98. How does generic delta work?

100. Generic steps:

We opt for generic extraction whenever the desired DataSource is not available in Business Content, or if it is already in use and we need to regenerate it. When you want to extract data from a table, view, InfoSet or function module, you use generic extraction. In generic extraction we create our own DataSource and activate it.

Steps:
1. The t-code is RSO2.
2. Give the DataSource name and assign it to a particular application component.
3. Give the short, medium and long descriptions (mandatory).
4. Give the table / function module / view / InfoSet.
5. Continue; this leads to a detail screen in which the Hide, Selection, Inversion and Field Only options are available.

HIDE is used to hide fields: their data will not be transferred from R/3 to BW. SELECTION makes the fields available in the selection screen of the InfoPackage when you schedule it. INVERSION is for key figures, which are multiplied by -1 to cancel out the value.

Once the DataSource is generated, you can extract data using it.

Now, about the delta: we have 0CALDAY, numeric pointer and timestamp.

0CALDAY: the delta is to be run only once a day, at the end of the day, and with a safety range of about 5 minutes. Numeric pointer: to be used for tables that only allow appending of records and no changes, e.g. CATSDB, the HR time management table. Timestamp: using this you can run the delta as many times as you like, with an upper limit.

Whenever there is a 1:1 relation you use a view; for 1:N you use a function module.

101. Generic delta datasource with numeric pointer?

Ans:- Delta in generic extraction is based on timestamp, calendar day, or numeric pointer. If you do not find a timestamp/calday field, we can use a numeric pointer; again it depends on the requirement. A sequentially incrementing field like the sales document number is a good example for a numeric pointer. Let's say 1000 is the starting number; 1001, 1002, etc. will be collected as the delta.

Also, Numeric pointer allows only newly added records but not changed records.

Check this one

1. In which scenario do we use the numeric pointer option in generic delta? Ans:- Whenever you require a delta based on some field other than a time reference, e.g. employee ID, GL account, etc.

2. Are there any specific types for the delta-specific field when we use a numeric pointer?

I don't think so; you can use it for any field.

3. When a numeric pointer is used, why do we go for a safety interval?

The safety interval is there to avoid any loss of delta, i.e. its intent is not to miss any records, because the last numeric pointer value might not have captured the last changed records.

- A numeric pointer picks up only newly added records.
- It picks up increasing values only.
- If you modify a record, the numeric pointer will not pick it up.

Some standard extractors use ALE change pointers to enable delta. You will need to check which fields are enabled for delta: only if there is a change in those fields will it take effect as a delta. Go to RSO2 in the source system, display the DataSource, and go to Menu -> DataSource -> ALE Delta.

Check the table name, the change document object, and the status of the table fields, i.e. whether they are enabled for the change document to get triggered.

102. WHAT IS THE DIFFERENCE BETWEEN LO & FI DATASOURCES? AND WHAT IS THE DIFFERENCE REGARDING DELTA? Ans:- LO extractors basically work on queue management, whereas the FI extractors work on timestamps.

103. What is the importance of the key date? Ans:-

Key Date: Every query has a key date. For time-dependent data, the key date determines the time for which the data is selected. The default value for the key date is the date on which the query is executed, that is, <today>.



1. Choose the key date selection icon. The Select Values for Date dialog box appears.
2. Choose a date from the calendar. If you select 01.01.1999, for example, time-dependent data for 01.01.1999 is read.
3. Choose OK.

You can also select a variable key date:

1. From the context menu, which you access using the black arrow next to the icon, choose Entry of Variables.
2. Select a variable.

For the selection, you may need to know the technical names of the variables as well as their descriptions. Choose Display Key/Text to show the key.

If you want to create a new variable, choose New Variable. The new variable is displayed. The variable editor appears and you can create a new variable.

If you want to change a variable, select the variable and choose Change Variable. The variable editor appears and you can change the variable.

3. Choose OK.

The key date only applies to time-dependent master data.


104. What is the difference between RKF and CKF?

A restricted key figure restricts a key figure's values based on characteristic values. Example: number of employees in the south region.

With calculated key figures, your own formulas can be created based on existing key figures, using the functions available in the formula builder: percentage functions, mathematical functions, Boolean functions and others.

Once you save these restricted and calculated key figures, they are saved globally and are available in all queries on that InfoProvider.

A new formula has a similar function to a calculated key figure, but it is a local formula. A new selection is likewise local and similar to a restricted key figure.

105. What is the difference between CKFs and free characteristics? A calculated key figure is a formula consisting of basic, restricted and other calculated key figures available in the InfoProvider; it is stored in the metadata repository for reuse in multiple queries.

Calculated key figures are defined at two levels:

1. Query level - the calculated key figure is valid only for the query in question.

2. InfoProvider level - the calculated key figure can be used in all queries that are based on the same InfoProvider. With free characteristics, by contrast, the user is able to navigate (drill down, drill across), and the data is displayed according to the navigational steps taken.

106. What things will you keep in mind before designing reports?

1. Keep it simple and short! Shorter is better, and for performance monitoring a one-page report is ideal. Don't create complex, hard-to-understand reports.
2. Charts and graphs should add value and convey a single message. Just because it is easy to create charts doesn't mean you need them. It is true that a chart often conveys a lot of information in a decision-compelling way, but a chart needs annotation, a descriptive title and labels.
3. Don't overload the user with too many numbers. Pages of detailed data are not a report. Some managers want all of the detail in a report; that's usually because they don't trust the accuracy of the summarized data. After some period of time, the manager may gain trust in the data and request more summarization. Then a real report can be designed.
4. When possible, use color, shading or graphics like arrows to highlight key findings, discrepancies or major indicators. In a paper or web-based report you cannot rely on color alone to convey differences in charts or tables.
5. Create and follow report design guidelines. Educate people who create reports about effective reporting.
6. Talk to managers/users. Discuss report design with the person who requests the report, and be willing to help end users who are creating ad hoc reports.
7. Make the context of the report obvious to anyone who sees it. Use a text box at the beginning of the report to quickly state reporting objectives, authorization, important background facts, and limitations of the data. Always include the date and title of the report in the header. Use page numbers and specify restrictions on distribution and confidentiality.
8. Make a plan. Sketch tables and charts and plan the order of information to be included. Decide what data to put in each report section and how to arrange the detail data. Make decisions about titles, headings, and data formats.

107. Have you used text variables at any time? Ans:- One of the scenarios where we used text variables is an age-wise (bucket) report.

The user inputs the dates as an input variable, and these dates should be displayed in the columns for that key figure. We define an RKF for this, in which the date variable and the quantity are used. User input dates:
1) 1 April - 5 April
2) 6 April - 10 April
3) 11 April - 15 April

So the report will look like:

Quantity (1 April-5 April) | Quantity (6 April-10 April) | Quantity (11 April-15 April)

So here we defined the dates as text variables, such that the input dates are displayed in the report column headings.

108. What types of variables are there? Ans:-
1. Variables for characteristic values
2. Variables for formulas
3. Variables for hierarchies and hierarchy nodes
4. Variables for text

109. What are the variable processing types? Ans:-
1. User entry / default values
2. Replacement path
3. SAP exit
4. Customer exit
5. Authorization object

110. How do you broadcast reports to the portal? Ans:-

Using the Java stack (information broadcasting), BI reports are published to the portal.

111. How do you create global structures? Ans:- Query Designer --> right-click on rows or columns --> Create Structure --> add your elements to that structure and save the structure with a technical name. It is then global to that MultiProvider/InfoProvider.

Create a structure and use Save As, giving a technical name and description.

In the Query Designer, at the top left, you can see a folder named Structures; expand it.

Structures are of two types:
1) Structures with key figures
2) Structures without key figures

If your structure has key figures, expand the folder 'Structures with key figures'; you can just drag the structure to columns/rows like any other object. These structures are available and valid for all the queries you create on the same InfoProvider.

Note: whatever changes you make to these structures take effect globally. If you want to make changes to a structure for only one query, without affecting it globally, then in the query panel remove the reference (right-click --> Remove Reference); now you can use the structure locally.

Another advantage is that you have flexible use of restricted and calculated key figures, including variables and formula variables.


If you are using BEx Queries as the basis for a universe, the structure elements will be converted into objects available in the universe.

A disadvantage of structures is that they are not dynamic: you need other means (hierarchy variables) in reports that have varying numbers of rows/columns.

Defining local formulae in a structure prevents reuse of that formula as a standalone CKF / RKF in another query.

Local formulae defined in a structure do not always aggregate how you would want them to, so sometimes the best option available is to create CKF on the InfoProvider, rather than the structure.

112. How many structures can we create in a report? How many default structures are created? Ans:-

We can create a maximum of two structures: one can be a characteristic structure, and only one key figure structure is allowed in a BEx report. By default, one structure is created.

113. What is formula collision? Ans:-

Formula collision means a conflict of formulae. In reporting, when there are two different formulae, one in the row structure (say, summation) and one in the column structure (say, multiplication), formula collision comes into the picture.

In the query definition, we have to specify which formula is to be taken into consideration for the cell in which the formula collision occurs.

(Or)

Formula Collision

The Formula Collision function is offered ONLY in the formulas property window.

When you define two structures, which both contain formulas, it is unclear to the system how to calculate the formulas at the point where both formulas intersect.

The following example clarifies the concept of formula collision:

              Column 1   Column 2   Column 1 x Column 2
Row 1         Value A    Value B    A x B
Row 2         Value C    Value D    C x D
Row 1 + Row 2 A + C      B + D      ? Formula collision ?

In this example, there are two rows and two columns with simple values; the third row is a simple summation formula and the third column is a simple multiplication. In the cell in which the row and column formulas meet, it is not clear which calculation should be made. If you calculate according to the column formula, this cell contains (A+C) x (B+D); if you calculate according to the row formula, it contains (A x B) + (C x D). These give different values: for instance, with A=1, B=2, C=3 and D=4, the column formula gives (1+3) x (2+4) = 24, while the row formula gives (1 x 2) + (3 x 4) = 14.

If a formula collision occurs, as described in the example above, you can determine which formula is used in the calculation. You can make the following settings in the Formula Collision field:

- Nothing defined: if you make no setting, the formula that was set last takes priority in a formula collision ("set" means that you defined and saved the formula).

- Result of this formula: the result of this formula has priority in a collision.

- Result of competing formula: the result of the competing formula has priority in a collision.

Collisions always occur when multiplication/division ("point" calculations) and addition/subtraction ("dash" calculations), or functions, are mixed in competing formulas. If both formulas contain only addition/subtraction, or only multiplication/division, both calculation directions give the same result; therefore no settings are required for formula collision.

114. I want to create a formula that should be valid only for one query. How do I create it? Ans:-

Create a local formula, not a CKF.

1) A formula is local; a calculated key figure is global.

When you create a formula, it is available only for that query. A calculated key figure is applicable across all queries on the same InfoProvider.

2) A formula doesn't have a technical name, whereas a CKF must have one.

3) All the formula functions from the formula builder are available when creating a formula, whereas only some of them are available for a CKF.

4) A formula is created in the Query Designer rows/columns, whereas a CKF is created in the left-side panel of the Query Designer (from the context menu of the Calculated Key Figures tree).


115. Can we create variables while defining exceptions or conditions? Ans:-

Yes; you can also use formula variables as the reference value of a condition.

116. How do you delete a BEx query that is in the production system, through a request?

Ans:- Using transaction RSZDELETE.

117. What are BW Statistics and what is their use? Ans:-

They are a group of Business Content InfoCubes which are used to measure performance for query and load monitoring. They also show the usage of aggregates, OLAP and warehouse management.

118. How do you place the company logo in reports?

Ans:- Import the image into the SE80 MIME Repository under /sap/bw/Customer/Images. Once done, assuming you are using the WAD for your reporting, right-click on the template and choose Insert -> Image. From the pop-up dialog, select the image imported through SE80.

119. Sales documents and billing documents should appear on the same line in the report; how do you model this? Ans:- Creating a sales order cube and a billing cube with a MultiProvider on top of them will generate multiple records; you will not get the result on one line. The reason is that you can't identify the billing document number from the sales cube, so it forms another line in the report.

To achieve this scenario, you have to consolidate both the sales and billing DSOs into one, which has the sales document number and the billing document number as its key.

First you load all the billing data from the billing DSO to the consolidation DSO; this is a direct mapping (you also get the sales document number in the source), so all fields are mapped directly. Next, while loading from the sales DSO, write an end routine to read the billing documents for the respective sales documents and write them to the consolidation DSO (see the sketch after this answer). In the consolidation DSO you get both sets of data, and you can report on it with the data in a single record.

Other options are using an InfoSet or the "constant selection" option, but it mostly depends on the business need. (Or) We can develop two DSOs, one each for sales and billing (using the 2LIS_11_VAITM and 2LIS_13_VDITM DataSources). Above them we can build a cube that takes all data from the billing DSO and populates the corresponding sales order from the sales DSO using a lookup.
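A minimal sketch of such an end routine, with hypothetical names throughout (ZBILLDOC for the billing document InfoObject, /BIC/AZBILL00 for the active table of the billing DSO); RESULT_PACKAGE and the row type _ty_s_TG_1 are generated by the transformation framework:

METHOD end_routine.
  FIELD-SYMBOLS <result_fields> TYPE _ty_s_tg_1.

  " For sales records, fill the billing document from the billing
  " DSO's active table, keyed by the sales document number.
  LOOP AT result_package ASSIGNING <result_fields>
       WHERE /bic/zbilldoc IS INITIAL.
    SELECT SINGLE /bic/zbilldoc
      FROM /bic/azbill00
      INTO <result_fields>-/bic/zbilldoc
      WHERE doc_number = <result_fields>-doc_number.
  ENDLOOP.
ENDMETHOD.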

120. I would like to display on the report the date the data was uploaded. Usually, we load the transactional data nightly. Is there any easy way to include this information on the report for users, so that they know the validity of the report?

Ans:- If I understand your requirement correctly, you want to display the date on which data was loaded into the data target from which the report is being executed. If so, configure your workbook to display the text elements in the report. This displays the 'relevance of data' field, which is the date on which the data load took place.

121. Different types of Attributes?

Ans: - Navigational attribute, Display attributes, Time dependent attributes, Compounding attributes, Transitive attributes, Currency attributes.

122. Transitive attributes?

Ans:- Navigational attributes that themselves have navigational attributes: those second-level navigational attributes are called transitive attributes.

HOW MANY CALCULATED KEY FIGURES CAN WE CREATE IN ONE REPORT? No limit; you can create any number.

CAN WE CREATE A VARIABLE ON A DISPLAY ATTRIBUTE? You can't, until and unless it is a navigational attribute.

123. What is I_STEP = 1, 2, 3?

Ans:- Use: The variable exit may not be executed, or false data may be selected, when executing a query that contains customer-exit variables that are filled dependent on the entry-ready variables. As a preemptive measure, you can control the dependencies with the parameter I_STEP.

Features: The enhancement RSR00001 (BW: Enhancements for Global Variables in Reporting) is called several times during execution of the report. The parameter I_STEP specifies when the enhancement is called.

The following values are valid for I_STEP:

· I_STEP = 1

The call takes place directly before variable entry: before the variable screen pops up, the code is executed once; only after that is the variable screen displayed. For example, if we want to populate constant/default values in the code, we use this step.

· I_STEP = 2

Call takes place directly after variable entry. This step is only started up when the same variable is not input ready and could not be filled at I_STEP=1.


When we enter the values in the variable screen, the code is executed. For example, if we want to calculate a value based on the values given on the variable screen, we use this step.

· I_STEP = 3

In this call, you can check the values of the variables. Triggering an exception (RAISE) causes the variable screen to appear once more; afterwards, I_STEP=2 is also called again. After entering the values in the variable screen, when we click on the execute button, this code is executed. For example, we use this step for data validation.

· I_STEP = 0

The enhancement is not called from the variable screen. The call can come from the authorization check or from the Monitor.
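To make this concrete, here is a minimal sketch of the customer exit (include ZXRSRU01 of enhancement RSR00001, called through function module EXIT_SAPLRRS0_001). ZV_PREVYEAR is a hypothetical exit variable filled at I_STEP = 2 from a hypothetical user-entry variable ZV_YEAR holding a four-digit year:

DATA: ls_range TYPE rrrangesid,   " row type of E_T_RANGE
      ls_var   TYPE rrrangeexit,  " row type of I_T_VAR_RANGE (see below)
      lv_year  TYPE n LENGTH 4.

CASE i_vnam.
  WHEN 'ZV_PREVYEAR'.
    IF i_step = 2.                      " after the variable screen
      READ TABLE i_t_var_range INTO ls_var
           WITH KEY vnam = 'ZV_YEAR'.
      IF sy-subrc = 0.
        lv_year = ls_var-low.
        lv_year = lv_year - 1.          " previous year
        ls_range-sign = 'I'.
        ls_range-opt  = 'EQ'.
        ls_range-low  = lv_year.
        APPEND ls_range TO e_t_range.
      ENDIF.
    ENDIF.
ENDCASE.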

Values of other Variables

When calling the enhancement RSR00001 (BW: Enhancements for Global Variables in Reporting), the system transfers the currently available values of the other variables in table I_T_VAR_RANGE. The table type is RRS0_T_VAR_RANGE, and the row type RRS0_S_VAR_RANGE references the structure RRRANGEEXIT.

This structure has the following fields:

Field     Description
VNAM      Variable name
IOBJNM    InfoObject name
SIGN      (I)ncluding or (E)xcluding
OPT       Operators: EQ (=), BT (between), LE (<=), LT (<), GE (>=), GT (>), CP, and so on
LOW       Characteristic value
HIGH      Upper-limit characteristic value for intervals / the node InfoObject for hierarchy nodes

Activities: A variable that is to be filled dependent on an entry-ready variable must never be filled at I_STEP=1. At that point you are prior to variable entry: values have not yet been entered for the input-ready variables.


You can insert the following statements to force the variable to be executed with I_STEP=2 and not I_STEP=1.

CASE i_vnam.
  WHEN '...'.              " your exit variable's name here
    IF i_step <> 2.
      RAISE no_processing.
    ENDIF.
ENDCASE.

124. Is an aggregate DB index required for query performance? Ans:- Yes, it improves report performance.

Using the Check Indexes button, you can check whether indexes already exist and whether the existing indexes are of the correct type (bitmap indexes).
Yellow status: there are indexes of the wrong type.
Red status: no indexes exist, or one or more indexes are faulty.
You can also list missing indexes using transaction DB02, pushbutton Missing Indexes. If a lot of indexes are missing, it can be useful to run the ABAP reports SAP_UPDATE_DBDIFF and SAP_INFOCUBE_INDEXES_REPAIR.


126. Why do you create aggregates? What is the valuation in aggregates? There are some plus (+) and minus (-) signs on that screen. Ans:- Aggregates are mini-cubes. When you execute a query on a cube (which, as you know, can hold a very large amount of data), the query has to search the entire cube for the information it requires, so reporting takes a long time.

Aggregates are one way to get better reporting performance: we aggregate the data in the cube as per the requirements of the query, so that the query doesn't have to search the entire database. The required data is maintained in smaller cubes, which take less time to read, since query execution only has to search a limited database.

We can aggregate as MAX, MIN, etc.

After creation, aggregates should be activated. When a query executes, it first searches the aggregates for the related data; once the data is found in the aggregates, the query executes in less time.

SAP evaluates how well your aggregates are used. There are cases where you create 3 aggregates on a cube but most of the queries run on the first aggregate; the remaining 2 aggregates are then a waste of space, as well as a waste of the time needed to fill them.

There may be a case where an aggregate is just a subset of a larger aggregate and has little significance for reports; in such cases you will see a minus sign.

It's SAP's way of saying that your aggregates are either not properly built or not used. If the sign is negative, you need to check what's going wrong.

127) The FI business flow related to BW; case studies or scenarios.
FI flow: basically there are 5 major topics/areas in FI:
1. GL Accounting - related tables are SKA1 and SKB1 for master data; BSIS and BSAS hold the transaction data.
2. Accounts Receivable - related to customers. These records are created when the SD-related data is transferred to FI. Related tables: BSID and BSAD.
3. Accounts Payable - related to vendors. These records are created when the MM-related document data is transferred to FI. Related tables: BSIK and BSAK.
All of the above six tables' data is present in the BKPF and BSEG tables; you can link these tables with the help of BELNR and GJAHR, and with dates as well.
4. Special Purpose Ledger - which is rarely used.
5. Asset Management.
In CO there are Profit Center Accounting and Cost Center Accounting.

128) How do you create conditions and exceptions in BI 7.0? (I know how in the BW 3.5 version.) From a query name or description, you cannot judge whether a query has any exceptions. There are two ways of finding exceptions for a query:
1. Execute the queries one by one; the ones whose background colour shows exception reporting have exceptions.
2. Open the queries in the BEx Query Designer. If you find an Exceptions tab to the right of the Filter and Rows/Columns tabs, the query has exceptions.

129) What is the difference between a filter & restricted key figures? Examples & steps in BI? A filter restriction applies to the entire query; an RKF is a restriction applied to one key figure. Suppose, for example, you want to analyze data only after 2006, showing sales in 2007 and 2008 against materials, and you have a key figure called Sales in your cube. You put a global restriction at query level by putting FISCYEAR > 2006 in the filter. This makes only data with fiscal year > 2006 available for the query to process or show. Now, to meet your requirement, like below:

Material   Sales in 2007   Sales in 2008
M1         200             300
M2         400             700

You need to create two RKFs. 'Sales in 2007' is one RKF, defined on the key figure Sales restricted by FISCYEAR = 2007. Similarly, 'Sales in 2008' is an RKF defined on the key figure Sales restricted by FISCYEAR = 2008. Now I think you understand the difference: the filter makes the restriction at query level - in the above case, the filter FISCYEAR > 2006 makes the cube's data for the years up to and including 2006 unavailable to the query, so the query is only left with data from 2007 and 2008 to show. Within that data, you can design your RKFs to show only 2007, or something like that.

130) What is the use of Define Cell in BEx & where is it useful? When you define selection criteria and formulas for structural components, and there are two structural components in a query, generic cell definitions are created at the intersections of the structural components; these determine the values presented in the cells. Cell-specific definitions allow you to define explicit formulas and selection conditions for cells, alongside the implicit cell definitions, and in this way to override implicitly created cell values. This function allows you to design much more detailed queries. In addition, you can define cells that have no direct relationship to the structural components; these cells are not displayed and serve as containers for helper selections or helper formulas.

You need two structures to enable the cell editor in BEx. In every query you have one structure for key figures; then you create another structure with selections or formulas inside. Having two structures, the cross between them results in a fixed reporting area of n rows x m columns, and the intersection of any row with any column can be defined as a formula in the cell editor. This is useful when you want a cell to behave differently from the general behaviour described in your query definition. For example, imagine you have the following, where % is the formula kfB / kfA * 100:

     kfA   kfB   %
chA  6     4     66%
chB  10    2     20%
chC  8     4     50%

Then you want the % for row chC to be the sum of the % for chA and the % for chB. In the cell editor you are able to write a formula specifically for that cell as the sum of the two cells above it, chC/% = chA/% + chB/%, giving:

     kfA   kfB   %
chA  6     4     66%
chB  10    2     20%
chC  8     4     86%

Manager Round Review Questions.

131) Production support. In production support there are two kinds of jobs which you will mostly be doing: 1) looking into data load errors, and 2) solving the tickets raised by users. Data loading involves monitoring process chains and solving errors related to data loads; other than this you will also be doing some enhancements to the existing cubes and master data, but that is done on request.

Users will raise a ticket when they face any problem with a query, like a report showing wrong values or incorrect data, if the system response is slow, or if the query runtime is high. Normally the production support activities include:
* Scheduling
* R/3 job monitoring
* BW job monitoring
* Taking corrective action for failed data loads
* Working on tickets with small changes in reports or in AWB objects
The activities in a typical production support role would be as follows:
1. Data loading - could be via process chains or manual loads.
2. Resolving urgent user issues - helpline activities.
3. Modifying BW reports as per the needs of the users.
4. Creating aggregates in the production system.
5. Regression testing when a version/patch upgrade is done.
6. Creating ad hoc hierarchies.
Daily activities in production include:
1. Monitoring data load failures through RSMO.
2. Monitoring process chains daily/weekly/monthly.
3. Performing the hierarchy/attribute change run.
4. Checking aggregate rollup.


132) An SAP BW functional consultant is responsible for the following key activities:
* Maintain project plans
* Manage all project activities, many of which are executed by resources not directly managed by the project leader (central BW development team, source system developers, business key users)
* Liaise with key users to agree reporting requirements and report designs
* Translate requirements into design specifications (report specs, data mapping/translation, functional specs)
* Write and execute test plans and scripts; coordinate and manage business/user testing
* Deliver training to key users
* Coordinate and manage productionization and rollout activities
* Track CIP (continuous improvement) requests; work with users to prioritize, plan and manage CIP

An SAP BW technical consultant is responsible for:
* SAP BW extraction using the standard data extractors and the available development tools, for SAP and non-SAP data sources
* SAP ABAP programming with BW
* Data modeling: star schema, master data, ODS and cube design in BW
* Data loading processes and procedures (performance tuning)
* Query and report development using BEx Analyzer and Query Designer
* Web report development using the Web Application Designer

133) Give me one example of a functional specification and explain what information we get from it.

Functional specs capture the requirements of the business user; technical specs translate those requirements into technical terms. Let's say the functional spec states:
1. The user should be able to enter the key date, fiscal year and fiscal version.
2. The Company variable should default to USA, but if users want to change it, they can open the drop-down list and choose another country.
3. The calculations or formulas in the report will be displayed with a precision of one decimal place.
4. The report should return twelve months of data depending on the fiscal year the user enters, or display quarterly values.

Functional specs are also called software requirements. The technical spec then follows, resolving each of the line items listed above:
1. To offer entry of the key date, fiscal year and fiscal version, certain InfoObjects must be available in the system. If they are available, should variables be created on them so they can be used as user-entry variables? To create the variables: what is the approach, where do you do it, what are the technical names of the objects you will use, and what will be the technical names of the objects you create as a result of this report?
2. The same explanation applies to the rest: how you set up each variable.
3. What property changes are needed to achieve the required precision.
4. How you will get the twelve months of data. The technical and display names of the report, who is authorized to run it, and so on are all clearly specified in the technical spec.

134) Who prepares the technical and functional specifications?

Technical specification: here we list all the BW objects (InfoObjects, DataSources, InfoSources and InfoProviders), describe the data flow and the behavior of the data load (either delta or full), and can also state the duration of cube activation or creation. Purely technical BW details belong in this document; it is not intended for end users.

Functional specification: here we describe the business requirements. That means we state which business areas we are implementing (SD, MM, FI, etc.) and detail the KPIs and the deliverable reports for the users. This document circulates among both functional consultants and business users, and it is applicable to end users as well.


135) How do we decide which cubes have to be created?

It depends on your project requirements. Customized cubes are not mandatory for all projects: only if your business requirements differ from the given scenarios (BI Content cubes) do you opt for customized cubes. Normally, BW customization and the creation of new InfoProviders depend on your source system. If your source system is something other than R/3, you should customize all your objects. If your source system is R/3 and your users use only the standard R/3 business scenarios (SD, MM, FI, etc.), then you do not need to create any InfoProviders or enhance anything in the existing BW Business Content; but 99% of the time this is not the case, because they will surely have added new business scenarios or new enhancements. For example, my first project was a BW implementation for Solution Manager. There we activated all the Business Content in CRM, but the source system had new scenarios for message escalation, ageing calculation and so on. For their business scenarios we could not use the standard Business Content, so we took the existing InfoObjects, created new InfoObjects that were not in the Business Content, and then created custom DataSources through to InfoProviders, as well as reports.

136) How do we gather the requirements for an Implementation Project?

One of the biggest and most important challenges in any implementation is gathering and understanding the end-user and process-team functional requirements. These functional requirements represent the scope of analysis needs and expectations (both now and in the future) of the end user. They typically involve all of the following:
* Business reasons for the project and the business questions answered by the implementation
* Critical success factors for the implementation
* Source systems involved and the scope of information needed from each
* Intended audience and stakeholders and their analysis needs
* Any major transformation needed in order to provide the information
* Security requirements to prevent unauthorized use

This process involves one seemingly simple task: find out exactly what the end users' analysis requirements are, both now and in the future, and build the BW system to those requirements. Although simple in concept, in practice gathering and reaching a clear understanding and agreement on a complete set of BW functional requirements is not always so simple.

137) What do we do in the Business Blueprint stage?

SAP has defined a business blueprint phase to help extract the pertinent information about your company that is necessary for implementation. The blueprints take the form of questionnaires designed to probe for information about how your company does business; as such, they also serve to document the implementation. Each business blueprint document essentially outlines your future business processes and business requirements. The kinds of questions asked are germane to the particular business function, as in the following sample questions:
1) What information do you capture on a purchase order?
2) What information is required to complete a purchase order?

Accelerated SAP question and answer database: the question and answer database (QADB) is a simple, although aging, tool designed to facilitate the creation and maintenance of your business blueprint. It stores the questions and the answers and serves as the heart of your blueprint. Customers are provided with a customer input template for each application to collect the data; the question-and-answer format is standard across applications to make it easier for the project team to use.

Issues database: another tool used in the blueprinting phase is the issues database, which stores any open concerns and pending issues that relate to the implementation. Centrally storing this information assists in gathering and then managing issues to resolution, so that important matters do not fall through the cracks. You can then track the issues in the database, assign them to team members, and update the database accordingly.

138) What is a dashboard?

A dashboard can be created using the Web Application Designer (WAD) or the Visual Composer (VC). A dashboard is a collection of reports, views, links and so on in a single view; iGoogle, for example, is a dashboard.

A dashboard is a graphical reporting interface that displays KPIs (Key Performance Indicators) as charts and graphs; in effect, it is a performance management system.

When we want a helicopter view of how all the organization's measures are performing, we need a report that quickly shows the trend in a graphical display. Such reports are called dashboard reports. We could still report these measures individually, but by keeping all measures on a single page we create a single access point from which users can view all the information available to them. This saves a lot of precious time, gives clarity on the decision that needs to be taken, and helps users understand the trend of the measures along with the business flow. Dashboards can be built with the Visual Composer or the WAD. To create your dashboard in BW:

(1) Create all BEx queries with the required variants and tune them well.
(2) Differentiate table queries and graph queries.
(3) Choose the graph type that meets your requirement.
(4) Draw the layout of how the dashboard page should look.
(5) Create a web template that has a navigation block / selection information.
(6) Keep the navigation block fields common across the measures.
(7) Include the relevant web items in the web template.
(8) Deploy the URL/iView to users through the portal/intranet.

The steps to be followed in the creation of Dashboard using WAD are summarized as below:

1) Open a new web template in the WAD.
2) Define the tabular layout as per the requirements, so as to embed the necessary web items.
3) Place the appropriate web items in the appropriate tabular grids.
4) Assign queries to the web items (a query assigned to a web item is called a data provider).
5) Take care that the navigation block's selection parameters are common across all the BEx queries of the affected data providers.
6) Set the properties of the individual web items as per the requirements; they can be modified in the Properties window or in the HTML code.
7) Use the URL produced when this web template is executed in the portal/intranet.

139) Tell me about web templates.

You can find where the web template details are stored from the following tables:

RSZWOBJ        Storage of the web objects
RSZWOBJTXT     Texts for templates/items/views
RSZWOBJXREF    Structure of the BW objects in a template
RSZWTEMPLATE   Header table for BW HTML templates

You can check these tables and search for your web template entry. However, to correct a template you will have to open it in the WAD and make the corrections there.
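In practice you would simply browse these tables in SE16. Purely as a sketch, assuming the open-source pyrfc library and a user with RFC authorization (the connection parameters below are placeholders), the same entries could also be read programmatically via the generic RFC_READ_TABLE function module:

```python
from pyrfc import Connection

# Placeholder logon data -- substitute your own system's values.
conn = Connection(ashost="bw-host", sysnr="00", client="100",
                  user="RFC_USER", passwd="secret")

# RFC_READ_TABLE is a generic table reader; here it lists the first
# few header entries for BW HTML templates from RSZWTEMPLATE.
result = conn.call("RFC_READ_TABLE",
                   QUERY_TABLE="RSZWTEMPLATE",
                   DELIMITER="|",
                   ROWCOUNT=20)
for row in result["DATA"]:
    print(row["WA"])  # one raw, pipe-delimited table line per entry
```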

140) Why do we have to construct setup tables?

The R/3 database structure for accounting is much simpler than the logistics structure. Once you post to a ledger, that is done; you can correct it, but that just produces another posting. BI can get information directly out of this (relatively) simple database structure. In LO, by contrast, you can have an order with multiple deliveries to more than one delivery address, and the payer can also be different. When one item (order line) changes, this can be reflected in the order, supply, delivery, invoice, etc. Therefore a special record structure is built for logistics reports, and this structure is now used for BI. In order to have this special structure filled with your starting position, you must run a setup; from that moment on, R/3 keeps filling this LO database. If you did not run the setup, BI would only receive data from the moment you start filling LO (with the Logistics Cockpit).

141) What is the statistical setup, and why is it needed?

Follow these steps to fill the setup tables:

1. Go to transaction RSA3 and see if any data is available for your DataSource. If data is there in RSA3, go to transaction LBWG (delete setup data) and delete the data by entering the application name.
2. Go to transaction SBIW --> Settings for Application-Specific DataSources --> Logistics --> Managing Extract Structures --> Initialization --> Filling in the Setup Table --> Application-Specific Setup of Statistical Data --> perform the setup for the relevant application.
3. In OLI*** (for example OLI7BW for the statistical setup of old documents: orders), give the name of the run and execute. Now all the available records from R/3 are loaded into the setup tables.
4. Go to transaction RSA3 and check the data.
5. Go to transaction LBWE and make sure the update mode for the corresponding DataSource is serialized V3 update.
6. Go to the BW system, create an InfoPackage, and under the Update tab select Initialize Delta Process; then schedule the package. Now all the data available in the setup tables is loaded into the data target.
7. For the delta records, go to LBWE in R/3 and change the update mode for the corresponding DataSource to direct/queued delta. By doing this, records bypass SM13 and go directly to RSA7. Go to transaction RSA7; once new records are added, you can immediately see them there.
8. Go to the BW system and create a new InfoPackage for delta loads; double-click on the new InfoPackage and under the Update tab select the Delta Update radio button.
9. Now you can go to your data target and see the delta records.

142) How can you decide whether query performance is slow or fast?

You can check that in transaction RSRT: execute the query in RSRT, then follow the steps below. Go to SE16 and, in the resulting screen, give the table name RSDDSTAT for BW 3.x or RSDDSTAT_DM for BI 7.0 and press Enter. You can then view all the details about the query, such as the time taken to execute it and the timestamps.
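As a hedged illustration only: if you export the statistics rows from SE16 to a file, a few lines of pandas can rank queries by runtime. The file name and the column names QUERYID and RUNTIME below are placeholders; the actual field names in RSDDSTAT_DM depend on your release, so adjust them to match your export.

```python
import pandas as pd

# Hypothetical export of RSDDSTAT_DM rows from SE16 as CSV; the column
# names below are placeholders -- check the field names in your export.
stats = pd.read_csv("rsddstat_dm_export.csv")

# Average and worst runtime per query, slowest first.
ranking = (stats.groupby("QUERYID")["RUNTIME"]
                .agg(["count", "mean", "max"])
                .sort_values("mean", ascending=False))
print(ranking.head(10))  # the ten slowest queries on average
```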

143) What is the difference between V1, V2 and V3 jobs in extraction?

V1 update: whenever we create a transaction in R/3 (e.g. a sales order), the entries go into the R/3 tables (VBAK, VBAP, ...); this takes place in the V1 update.
V2 update: the V2 update starts a few seconds after the V1 update, and in this update the values go into the statistical tables, from which we do the extraction into BW.
V3 update: purely for BW extraction.

144) What are the statistical update and the document update?

Synchronous updating (V1 update): the statistics update is made synchronously with the document update. If problems occur during updating that result in termination of the statistics update, the original documents are NOT saved. The cause of the termination should be investigated and the problem solved; the documents can then be entered again.

Asynchronous updating (V2 update): the statistics update is made separately from the document update, in its own update task.

145) How do we do the SD and MM configuration for BW?

You need to activate the DataSources in the R/3 system and maintain the logon information for the logical system: in SM59, choose the RFC destination for the BW system and, under Logon & Security, maintain the user credentials. You also need to maintain the control parameters for data transfer and fill the setup tables via SBIW. I feel these are the main prerequisites.

From an SD perspective, as a BW consultant you should first understand the basic SD process flow on the R/3 side (search the forum for "SD process flow" and you will get a wealth of information on the flow as well as the tables and transactions involved in SD). Next you need to understand the process flow implemented at the client's site: how the SD data flows, what the integration points with other modules are, and how that integration happens. This knowledge is essential when modeling your BW design.

From a BW perspective, you need to first know all the SD extractors and what information they bring, and then look at all the cubes and ODS objects for SD.

146)

125) Shall I put a CKF in an RCKF?
126) What is the t-code for RRI?
127) What is the t-code for sales order and general ledger creation?
128) Can we use a CKF in an RKF? Yes.
129) Open hub and InfoSpoke?
130) Of InfoCube and DSO, which is better suited for reporting? Explain, and what are the drawbacks of each?


131) Early delta initialization?
132) I have loaded data using V3 unserialized to a DSO; what happens in the background?
133) What is the "row count", and in which InfoProviders is it available?
134) Do we have 0RECORDMODE in a write-optimized DSO?
135) If I put a key figure in the key fields area of a DSO, what happens?