Project Related Questions
I would request my readers to start posting answers to these questions in the discussion forum under the Informatica Technical Interview Guidance tag. I will review them; only valid answers will be kept and the rest will be deleted.

1. Explain your project.
2. What are your daily routines?
3. How many mappings have you created all together in your project?
4. In which account does your project fall?
5. What is your reporting hierarchy?
6. How many complex mappings have you created? Could you please describe the situation for which you developed that complex mapping?
7. What was your involvement in the performance tuning of your project?
8. What is the schema of your project? And why did you opt for that particular schema?
9. What are your roles in this project?
10. Can you describe one situation you handled by which performance improved dramatically?
11. Were you involved in more than two projects simultaneously?
12. Do you have any experience in production support?
13. What kinds of testing have you done on your project (unit, integration, system, or UAT)? And were enhancements done after testing?
14. How many dimension tables are there in your project, and how are they linked to the fact table?
15. How do we do the fact load?
16. How did you implement CDC in your project?
17. What does your File-to-Load mapping look like?
18. What does your Load-to-Stage mapping look like?
19. What does your Stage-to-ODS mapping look like?
20. What is the size of your data warehouse?
21. What are your daily feed size and weekly feed size?
22. Which approach (top-down or bottom-up) was used in building your project?
23. How do you access your sources (are they flat files or relational)?
24. Have you developed any stored procedures or triggers in this project? How did you use them, and in which situations?
25. Did your project go live? What issues did you face while moving your project from the test environment to the production environment?
26. What is the biggest challenge that you encountered in this project?
27. What scheduler tool did you use in this project? How did you schedule jobs using it?
28. Difference between Informatica 7.x and 8.x?
29. Difference between connected and unconnected Lookup transformations in Informatica?
30. Difference between stop and abort in Informatica?
31. Difference between static and dynamic caches?
32. What is a persistent lookup cache? What is its significance?
33. Difference between a reusable transformation and a mapplet?
34. How does the Informatica server sort string values in the Rank transformation?
35. Is the Sorter an active or passive transformation? When do we consider it active, and when passive?
36. Explain the Informatica server architecture.
37. In an Update Strategy, which gives better performance, a relational table or a flat file? Why?
38. What are the output files that the Informatica server creates while running a session?
39. Can you explain what error tables in Informatica are, and how we do error handling in Informatica?
40. Difference between constraint-based loading and target load plan?
41. Difference between the IIF and DECODE functions?
42. How do you import an Oracle sequence into Informatica?
43. What is a parameter file?
44. Difference between normal load and bulk load?
45. How will you create a header and footer in the target using Informatica?
46. What are the session parameters?
47. Where does Informatica store rejected data? How do we view it?
48. What is the difference between partitioning of relational targets and of file targets?
49. What are mapping parameters and variables, and in which situations can we use them?
50. What do you mean by direct loading and indirect loading in session properties?
51. How do we implement a recovery strategy while running concurrent batches?
52. Explain the versioning concept in Informatica.

These are the questions I would normally expect an interviewee to know when I sit on a panel.
54. What is data driven?
55. What is a batch? Explain the types of batches.
56. What types of metadata does the repository store?
57. Can you use the mapping parameters or variables created in one mapping in another mapping?
58. Why did we use stored procedures in our ETL application?
59. When we can join tables in the Source Qualifier itself, why do we go for the Joiner transformation?
60. What is the default join operation performed by the Lookup transformation?
61. What is a hash table in Informatica?
62. In a Joiner transformation, you should specify the table with fewer rows as the master table. Why?
63. Difference between a cached lookup and an uncached lookup?
64. Explain what the DTM does when you start a workflow.
65. Explain what the Load Manager does when you start a workflow.
66. In a sequential batch, how do I stop one particular session from running?
67. What types of aggregations are available in Informatica?
68. How do I create indexes after the load process is done?
69. How do we improve the performance of the Aggregator transformation?
70. What different types of caches are available in Informatica? Explain in detail.
71. What is polling?
72. What are the limitations of the Joiner transformation?
73. What is a mapplet?
74. What are active and passive transformations?
75. What are the options in the target session of an Update Strategy transformation?
76. What is a code page? Explain the types of code pages.
77. What do you mean by rank cache?
78. How can you delete duplicate rows without using a dynamic lookup? Tell me any other ways of deleting duplicate rows using a lookup.
51. Can you copy a session into a different folder or repository?
52. What is tracing level, and what are its types?
53. What is the command used to run a batch?
54. What are the unsupported repository objects for a mapplet?
55. If your workflow is running slowly, what is your approach to performance tuning?
56. What types of mapping wizards are available in Informatica?
57. After dragging the ports of three sources (SQL Server, Oracle, Informix) to a single Source Qualifier, can we map these three ports directly to a target?
58. Why do we use the Stored Procedure transformation?
59. Which object is required by the Debugger to create a valid debug session?
60. Can we use an active transformation after an Update Strategy transformation?
61. Explain how we set the update strategy at the mapping level and at the session level.
62. What is the exact use of the 'Online' and 'Offline' server connect options while defining a workflow in the Workflow Monitor? The system hangs when the 'Online' server connect option is used and Informatica is installed on a personal laptop.
63. What is change data capture?
64. Write a session parameter file that will change the sources and targets for every session, i.e. different sources and targets for each session run.
65. What are partition points?
66. What are the different threads in the DTM process?
67. Can we do ranking on two ports? If yes, explain how.
68. What is a transformation?
69. What does the Stored Procedure transformation do that is special compared to other transformations?
70. How do you recognize whether the newly added rows got inserted or updated?
71. What is data cleansing?
72. My flat file's size is 400 MB, and I want to see the data inside the flat file without opening it. How do I do that?
73. Difference between Filter and Router?
74. How do you handle the decimal places when you are importing a flat file?
75. What is the difference between $ and $$ in a mapping or parameter file? In which cases are they generally used?
89. What precautions do you need to take when you use a reusable Sequence Generator transformation for concurrent sessions?
90. Is a negative increment possible in the Sequence Generator? If yes, how would you accomplish it?
91. In which directory does Informatica look for the parameter file, and what happens if it is missing when you start the session? Does the session stop after it starts?
92. Informatica is complaining that the server could not be reached. What steps would you take?
93. You have more than five mappings that use the same lookup. How can you manage the lookup?
94. What will happen if you copy a mapping from one repository to another repository and there is no identical source?
95. How can you limit the number of running sessions in a workflow?
96. An Aggregator transformation has four ports (sum(col1), group by col2, col3); which port should be the output?
97. What is a dynamic lookup, and what is the significance of NewLookupRow? How will you use them for rejecting duplicate records?
98. If you have more than one pipeline in your mapping, how will you change the order of the load?
99. When you export a workflow from the Repository Manager, what does the XML contain? The workflow only?
100. Your session failed, and when you try to open a log file, it complains that the session details are not available. How would you trace the error? Which log file would you look for?
101. You want to attach a file as an email attachment from a particular directory using the Email task in Informatica. How will you do it?
102. You have a requirement to be alerted about any long-running sessions in your workflow. How can you create a workflow that will send you an email for sessions running more than 30 minutes? You can use any method: a shell script, a procedure, or an Informatica mapping or workflow control.
4. What is a Star Schema?
5. What is dimensional modelling?
6. What is a Snowflake Schema?
7. What are the different methods of loading dimension tables?
8. What are aggregate tables?
9. What is the difference between OLTP and OLAP?
10. What is ETL?
11. What are the various ETL tools in the market?
12. What are the various reporting tools in the market?
13. What is a fact table?
14. What is a dimension table?
15. What is a lookup table?
16. What is a general-purpose scheduling tool? Name some of them.
17. What modeling tools are available in the market? Name some of them.
18. What is real-time data warehousing?
19. What is data mining?
20. What is normalization? First normal form, second normal form, third normal form?
21. What is an ODS?
22. What type of indexing mechanism do we need to use for a typical data warehouse?
23. Which columns go to the fact table, and which columns go to the dimension table? (My user needs to see <data element> broken down by <data element>: all elements before "broken down by" are fact measures; all elements after it are dimension elements.)
24. What is the level of granularity of a fact table? What does this signify? (With weekly-level summarization, there is no need to have the invoice number in the fact table anymore.)
25. How are dimension tables designed? (De-normalized, wide, short, using surrogate keys, containing additional date fields and flags.)
26. What are slowly changing dimensions?
27. What are non-additive facts? (Inventory, account balances in a bank.)
28. What are conformed dimensions?
29. What is a VLDB? (If a database is too large to back up in the available time frame, it is a VLDB.)
Informatica Designer
Q. How do you execute a PL/SQL script from an Informatica mapping?
Q. How can you define a transformation? What different types of transformations are available in Informatica?
Q. What is a Source Qualifier? What is meant by Query Override?
Q. What is the Aggregator transformation?
Q. What is incremental aggregation?
Q. How is the Union transformation used?
Q. Can two flat files be joined with the Joiner transformation?
Q. What is a Lookup transformation?
Q. Can a lookup be done on flat files?
Q. What are connected and unconnected lookups?
Q. What is a mapplet?
Q. What does reusable transformation mean?
Q. What is update strategy, and what are the options for update strategy?
Data Warehousing - ETL Project Life Cycle (Simple to Understand)
Submitted by shivakrishnas on Tue, 2010-12-28 08:56
Data warehousing projects are categorized into four types:
1) Development projects
2) Enhancement projects
3) Migration projects
4) Production support projects

The following phases are involved in an ETL project development life cycle:
1) Business Requirement Collection (BRD)
2) System Requirement Collection (SRD)
3) Design Phase
   a) High Level Design document (HLD)
   b) Low Level Design document (LLD)
   c) Mapping design
4) Code Review
5) Peer Review
6) Testing
   a) Unit Testing
   b) System Integration Testing
   c) User Acceptance Testing (UAT)
7) Pre-Production
8) Production (Go-Live)

Business Requirement Collection:
- Business requirement gathering is started by the business analyst, the onsite technical lead, and the client's business users.
- In this phase, a business analyst prepares the Business Requirement Document (BRD), also called the Business Requirement Specifications (BRS).
- BR collection takes place at the client location.
- The outputs from BR analysis are:
  - BRS: the business analyst gathers the business requirements and documents them in the BRS.
  - SRS: senior technical people or the ETL architect prepare the SRS, which contains the software and hardware requirements. The SRS includes:
    a) the O/S to be used (Windows or Unix)
    b) the RDBMS required to build the database (Oracle, Teradata, etc.)
    c) the ETL tools required (Informatica, DataStage)
    d) the OLAP tools required (Cognos, BO)
    The SRS is also called the Technical Requirement Specifications (TRS).

Designing and Planning the Solution:
- The outputs from the design and planning phase are:
  a) the HLD (High Level Design) document
  b) the LLD (Low Level Design) document
- HLD document: an ETL architect and a DWH architect participate in designing the solution to build the DWH. The HLD document is prepared based on the business requirements.
- LLD document: based on the HLD, a senior ETL developer prepares the Low Level Design document. The LLD contains the more technical details of the ETL system: a data flow diagram (DFD) and details of the sources and targets of each mapping. The LLD also contains information about full and incremental loads. After the LLD, the development phase starts.

Development Phase (Coding):
- Based on the LLD, the ETL team creates the mappings (the ETL code).
- After designing the mappings, the code (the mappings) is reviewed by the developers.

Code Review:
- Code review is done by a developer.
- In code review, the developer reviews the code and the logic, but not the data.
- The following activities take place in code review:
  - checking the naming standards of transformations, mappings, etc.
  - checking the source-to-target mapping (whether the correct logic is placed in the mapping).

Peer Review:
- The code is reviewed by a team member (a third-party developer).

Testing:
The following types of testing are carried out in the testing environment:
1) Unit Testing
2) Development Integration Testing
3) System Integration Testing
4) User Acceptance Testing

Unit Testing:
- A unit test for the DWH is white-box testing; it should check the ETL procedures and mappings.
- The following test cases can be executed by an ETL developer:
  1) verify there is no data loss
  2) compare the number of records in source and target
  3) data load / insert
  4) data load / update
  5) incremental load
  6) data accuracy
  7) verify naming standards
  8) verify column mappings
- The unit test is carried out by the ETL developer in the development phase.
- The ETL developer also has to do the data validations in this phase.

Development Integration Testing:
- Run all the mappings in sequence order.
- First run the source-to-stage mappings.
- Then run the mappings related to dimensions and facts.

System Integration Testing:
- After the development phase, the code is moved to the QA environment.
- In this environment, the testing people are given read-only permission.
- They test all the workflows.
- They test the code according to their standards.

User Acceptance Testing (UAT):
- This test is carried out in the presence of client-side technical users to verify the data migration from source to destination.

Production Environment:
- Migrate the code into the go-live environment from the test (QA) environment.
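As a minimal sketch of the "compare record counts" unit-test case listed above, assuming hypothetical source and target tables in SQLite rather than a real warehouse:

```python
import sqlite3

# Hypothetical source and target tables; in a real project these would
# live in the source system and the warehouse, not in one SQLite file.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE src_orders (order_id INTEGER, amount REAL);
    CREATE TABLE tgt_orders (order_id INTEGER, amount REAL);
    INSERT INTO src_orders VALUES (1, 10.0), (2, 20.0), (3, 30.0);
    INSERT INTO tgt_orders SELECT * FROM src_orders;  -- the "ETL load"
""")

def record_counts_match(conn, src, tgt):
    """Unit-test case: row counts in source and target must agree."""
    n_src = conn.execute(f"SELECT COUNT(*) FROM {src}").fetchone()[0]
    n_tgt = conn.execute(f"SELECT COUNT(*) FROM {tgt}").fetchone()[0]
    return n_src == n_tgt

print(record_counts_match(con, "src_orders", "tgt_orders"))  # True if no data loss
```

The same pattern extends to the other test cases (e.g. comparing checksums or sums of measures for data accuracy).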
What exactly is the difference between HLD and LLD?
Reply from Unknown User | posted Aug 4, 2007

HLD = High Level Design; LLD = Low Level Design. It depends on what aspect of the project you are talking about. For instance, for a data-mapping exercise to map source/legacy data into the EDW, the system architect / solution architect makes the HLD and mentions a number of source tables from which a target table will be filled. Table-to-table mapping is mentioned, e.g.:

Target_Table_1 is filled by SourceSystemA.Source_TableA
Target_Table_1 is filled by SourceSystemA.Source_TableB
Target_Table_1 is filled by SourceSystemB.Source_TableA

The LLD is done by a data mapper / business analyst who maps source columns to target columns and mentions the transformation rules for the mapping. This was an example of HLD and LLD in the aspect of business rules and data mapping. A similar example applies to ETL design / Informatica design: the technical architect / technical manager first puts down an HLD mentioning naming conventions and the technical design of the project, and then later discusses with the ETL lead and formulates a complete LLD for the project, mentioning each and every assumption and the procedures to be followed, leaving nothing to be assumed on the developers' side.
People who have been involved in software projects will constantly hear the terms High Level Design (HLD) and Low Level Design (LLD). So what are the differences between these two design stages, and when is each used?

High Level Design gives the overall system design in terms of functional architecture and database design. It designs the overall architecture of the entire system, from the main module down to all submodules. This is very useful for developers to understand the flow of the system. In this phase the design team, the review team (testers), and the customers play a major role. The entry criterion is the requirement document, i.e. the SRS. The exit criteria are the HLD, project standards, the functional design documents, and the database design document. Further, the high level design gives an overview of the development of the product: how the program is going to be divided into functions, modules, subdivisions, etc.

Low Level Design (LLD): During the detailed design phase, the view of the application developed during high level design is broken down into modules and programs. Logic design is done for every program and then documented as program specifications, and for every program a unit test plan is created. The entry criterion is the HLD document; the exit criteria are the program specifications and the unit test plan (the LLD). The Low Level Design document gives the design of the actual program code, based on the High Level Design document. It defines the internal logic of the corresponding submodules; designers prepare individual LLDs and map them to every module. A good Low Level Design document makes the program very easy to develop: if proper analysis is done and the LLD is prepared, developers can write the code directly from it with minimal effort spent debugging and testing.
High Level Design means precisely that: a high level design discusses an overall view of how something should work and the top-level components that will comprise the proposed solution. It should have very little implementation detail, i.e. no explicit class definitions, and in some cases not even details such as the database type (relational or object) or the programming language and platform. A low level design has nuts-and-bolts detail in it, and must come after the high level design has been signed off by the users, because the high level design is much easier to change than the low level design.
HLD: It refers to the functionality to be achieved to meet the client requirement. Precisely speaking, it is a diagrammatic representation of the client's operational systems, staging areas, data warehouse, and data marts, and of how and at what frequency the data is extracted and loaded into the target database.

LLD: It is prepared for every mapping, along with a unit test plan. It contains the names of the source definitions, target definitions, and transformations used, the column names, data types, the business logic written, a source-to-target field matrix, the session name, and the mapping name.
HLD: Based on the SRS, software analysts convert the requirements into a usable product. They design an application that will help the programmers in coding. In the design process, the product is broken into independent modules; each module is then taken in turn and broken down further to arrive at the micro level. The HLD document contains the following items at a macro level:
- a list of modules and a brief description of each module
- the brief functionality of each module
- the interface relationships among modules
- dependencies between modules
- the database tables identified, along with their key elements
- overall architecture diagrams, along with technology details

LLD: The HLD contains details at a macro level, so it cannot be given to programmers as a document for coding. The system analysts therefore prepare a micro-level design document, called the LLD, which describes each and every module in an elaborate manner so that the programmer can code directly from it. There is at least one document per module, and there may be more. The LLD contains:
- the detailed functional logic of the module, in pseudocode
- the database tables, with all elements, including their type and size
- all interface details, with complete API references (both requests and responses)
- all dependency issues and error message listings
- the complete inputs and outputs of the module

(courtesy 'anonimas') The HLD is the first output of the system design phase (in the SDLC). Here we design the overall architecture of the system; the main functional or core modules are given shape, including the control flow between the main modules, the E-R status, etc. The main outputs are E-R diagrams, flow charts, DFDs, etc. In the LLD we create a more detailed and specific design of the system: exactly how we build the DB structure, the interface design, etc. The main outputs are the DB schema, frameworks, interface designs, etc.
Alternate Index
What is the use of an Alternate Index? Is accessing a file through an alternate index fast?

An alternate index is used to access the records of a file using an alternate key when the primary key is not available. Accessing records through the alternate index is slow, because the alternate index format is alternate key plus primary key: using the alternate key we first get the primary key from the alternate index file, and then search the file itself using that primary key. So access via an alternate index takes two lookups and is slower.
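The two-step lookup described above can be sketched with plain dictionaries (the names and data are hypothetical; a real VSAM alternate index works at the file-record level, not in memory):

```python
# Hypothetical base "file": primary key -> record.
base_file = {
    "EMP001": {"name": "Asha", "dept": "HR"},
    "EMP002": {"name": "Ravi", "dept": "IT"},
}

# Hypothetical alternate index: alternate key (name) -> primary key.
alternate_index = {
    "Asha": "EMP001",
    "Ravi": "EMP002",
}

def read_by_alternate_key(alt_key):
    """Two lookups: the alternate index gives the primary key,
    then the base file is searched with that primary key."""
    primary_key = alternate_index[alt_key]   # lookup 1
    return base_file[primary_key]            # lookup 2

print(read_by_alternate_key("Ravi"))  # {'name': 'Ravi', 'dept': 'IT'}
```

Reading by primary key is one lookup; reading by alternate key is two, which is why it is slower.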
Informatica Architecture
Can you explain the Informatica architecture? And what is the difference between service-based and service-oriented?

(Original diagram, reconstructed:) Data flows from the sources (e.g. Oracle, DB2) through the Informatica layer to the targets. The Informatica layer consists of the client tools, the PowerCenter repository, and the repository server.
Closing 1 excel when multiple instances of excel are open during runtime in QTP
I have written a VB script to batch-run QTP scripts. My VB script takes input from the "ControlFile" Excel workbook to get the name of each test script to open and execute in QTP, so I need to keep this "ControlFile" workbook open throughout the execution of all the scripts in the batch. The problem is that my scripts open some Excel workbooks for comparison, and when those are closed with appexcel.Quit, even my "ControlFile" workbook closes, so I am unable to get the script names after that and the execution stops. Can anyone please help me close one particular instance of Excel during runtime? Thanks in advance for the help!
SQL> SELECT object_id, object_type, object_subtype, object_name, owner_id, create_info,
     code_page, opb_object_id, owner_id, group_id, last_saved, objversion, comp_version
     FROM opb_cnx;

The output (reformatted) shows two rows:

OBJECT_ID  OBJECT_TYPE  OBJECT_SUBTYPE  OBJECT_NAME      OWNER_ID  CREATE_INFO  CODE_PAGE   OPB_OBJECT_ID  OWNER_ID  GROUP_ID  LAST_SAVED  OBJVERSION  COMP_VERSION
10         73           101             TUTORIAL_SOURCE  2252      -5.355E+27   -2.01E-100  64             2252      2         ?           0           1
11         73           101             TUTORIAL_TARGET  2252      -5.355E+27   -2.01E-100  64             2252      2         ?           0           1

Can I delete them from the db and recreate them?
Organization problems
Consider an organization with which you are familiar. If the organization uses a file processing system, what problems will the organization face? Explain with suitable examples.
Discuss which is better among incremental load, normal load, and bulk load.

Incremental load: Suppose today we processed 100 records. For tomorrow's run, you need to extract only whatever records were newly inserted or updated after the previous run, based on the last-updated timestamp (of yesterday's run). This process is called incremental (or delta) loading.
Normal load: In a normal load we process the entire source data into the target with constraint-based checking.
Bulk load: In a bulk load we process the entire source data into the target without checking constraints in the target.
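The incremental (delta) extraction described above can be sketched with SQLite (the table, column names, and timestamps are hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE orders (order_id INTEGER, amount REAL, last_updated TEXT);
    INSERT INTO orders VALUES
        (1, 10.0, '2010-12-27 09:00:00'),   -- loaded in yesterday's run
        (2, 20.0, '2010-12-28 08:15:00'),   -- inserted after yesterday's run
        (3, 30.0, '2010-12-28 10:40:00');   -- updated after yesterday's run
""")

# Timestamp of the previous (yesterday's) run, normally read from a control table.
last_run = "2010-12-28 00:00:00"

# Delta extraction: only rows inserted/updated after the previous run.
delta = con.execute(
    "SELECT order_id FROM orders WHERE last_updated > ? ORDER BY order_id",
    (last_run,),
).fetchall()

print([r[0] for r in delta])  # [2, 3] -- only the new/changed rows
```

A full (normal or bulk) load would instead select every row on every run; the bulk/normal distinction is about target-side constraint checking and logging, not about which rows are extracted.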
What are the two modes of data movement in the Informatica Server?
The data movement mode determines whether the Informatica Server processes single-byte or multibyte character data. The mode selection can affect the enforcement of code page relationships and code page validation in the Informatica Client and Server.
a) Unicode - the IS allows two bytes for each character and uses an additional byte for each non-ASCII character (such as Japanese characters).
b) ASCII - the IS holds all data in a single byte.
The IS data movement mode can be changed in the Informatica Server configuration parameters. The change takes effect once you restart the Informatica Server.
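The storage difference between the two modes can be illustrated in Python: a single-byte encoding stores one byte per character, while a two-byte Unicode encoding (UTF-16 is used here only as an analogy for the server's internal wide representation) stores two bytes per character:

```python
text_ascii = "abc"
text_jp = "データ"  # Japanese for "data"

# Single-byte mode analogy: one byte per character (ASCII data only).
print(len(text_ascii.encode("ascii")))      # 3 bytes for 3 characters

# Unicode mode analogy: two bytes per character (UTF-16-LE, no BOM).
print(len(text_ascii.encode("utf-16-le")))  # 6 bytes for 3 characters
print(len(text_jp.encode("utf-16-le")))     # 6 bytes for 3 Japanese characters
```

Note that Japanese text simply cannot be represented in the single-byte mode, which is why Unicode mode is required for such data.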
If a session fails after loading 10,000 records into the target, how can you load the records from the 10,001st record when you run the session the next time in Informatica 6.1?

In Informatica 8.6 the recovery feature is improved. The Informatica server writes real-time recovery information to a queue, which helps maintain data integrity during recovery, so no data is lost or duplicated. The recovery queue stores the reader state, the commit number, and the message IDs the Informatica server committed to the target. During recovery, the Informatica server uses this recovery information to determine where it stopped processing. The recovery ignore list stores message IDs that the IS wrote to the target for a failed session; the Informatica server writes recovery information to the list if there is a chance the source did not receive an acknowledgement. While recovering, the Informatica server uses the recovery ignore list to prevent data duplication. There are three options for recovery: 1. Fail task and continue workflow. 2. Resume from last checkpoint. 3. Restart task.
If I use the bulk loading option for a session, can I perform recovery on that session?

No. When you use bulk loading you cannot recover the rows, because bulk load does not write database logs. In a bulk load no redo log entries are created (a normal load does create them), which is also why session performance increases with bulk load.
Which tool do you use to create and manage sessions and batches, and to monitor and stop the Informatica server?

The Informatica Server Manager is the tool used to create and manage sessions and batches. In later versions, the Workflow Manager is used to create and manage sessions and batches, and the Workflow Monitor is used to monitor and abort/stop sessions.
Suppose I have one source linked to three targets. When the workflow runs for the first time, only the first target should be populated and the other two (second and third) should not be. When the workflow runs for the second time, only the second target should be populated, and when it runs for the third time, only the third target.

First create a Sequence Generator with a start value of 1 and a maximum value of 3, and enable the "Cycle" option. Make sure the cache value is set to 0. In the data flow, use an Expression transformation to collect the data-flow ports and add a new port (iteration_no) to hold the sequence NEXTVAL. Pass this data to a Router with three groups: the first group's condition is iteration_no = 1, the second group's is iteration_no = 2, and the third group's is iteration_no = 3. This way, each session run loads the first, second, or third target instance in cyclic order.
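The intent of the answer above — a counter that cycles 1, 2, 3 across runs and routes each run's rows to a single target — can be simulated in Python (this is a sketch of the routing logic only, not of actual Informatica behavior):

```python
import itertools

# Cycling "sequence generator": 1, 2, 3, 1, 2, 3, ...
iteration_no = itertools.cycle([1, 2, 3])

targets = {1: [], 2: [], 3: []}      # three hypothetical target tables
source_rows = ["row_a", "row_b"]      # hypothetical source data

# Three workflow runs: each run picks the next iteration number, and
# the "router" sends every source row to the matching target only.
for run in range(3):
    group = next(iteration_no)
    targets[group].extend(source_rows)

print(targets[1])  # ['row_a', 'row_b'] -- populated on run 1 only
print(targets[2])  # ['row_a', 'row_b'] -- populated on run 2 only
print(targets[3])  # ['row_a', 'row_b'] -- populated on run 3 only
```

In a real mapping the cycle value must persist between runs (hence cache value 0 and the Cycle option on the Sequence Generator), so that each run picks up where the last one stopped.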
In total, how many Joiner transformations are needed to join 10 different sources? (a recent TCS interview question)

It is n-1, so you need 9 Joiner transformations.
Hi, this was asked in an Accenture interview. Can anyone please tell me the answer: what is the difference between Informatica 7.1 and 8.1?

The main differences between Informatica 7.x and 8.x are:
1. The Java transformation is introduced in Informatica 8.x (it was not in 7.x).
2. The PowerExchange tool is also introduced in Informatica 8.x (it was not in 7.x).
Other 8.x features include:
- Pushdown optimization
- Target from transformation
- User-defined functions (UDFs)
- Concurrently writing files / flat file enhancements
- Deployment groups
- Data Masking and HTTP transformations
- Grid support
- Partitioning based on the number of CPUs
- INSTR and REG_REPLACE string functions
- LDAP authentication in user management
- SYSTIMESTAMP
$ - this is the symbol for server or built-in variables. $$ - this is the symbol for the variables or parameters that we create.

$ denotes a session parameter, e.g. $DBConnection. $$ denotes a mapping parameter or variable, e.g. $$LastRunDate.
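A minimal parameter file illustrating both (the folder, workflow, session, and parameter names are hypothetical):

```
[MyFolder.WF:wf_daily_load.ST:s_m_load_orders]
$DBConnection_Src=Oracle_Src_Conn
$$LastRunDate=2010-12-28
```

The section heading scopes the values to one session; $DBConnection_Src is a session parameter resolved at run time, while $$LastRunDate is a mapping parameter referenced inside the mapping logic.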
1. How do you load the same record twice into a target table? Give the approach.
A Router transformation can be used: use the same condition for both groups so that all rows pass through both groups, then connect both groups to the same target table.

2. How do you get a particular record from a table in Informatica?
Use a WHERE clause. For example, to get the records starting with 'A', write an SQL override in the Source Qualifier transformation:
SELECT * FROM TABLENAME WHERE ENAME LIKE 'A%';

3. How do you create a primary key only on odd numbers?
Use the MOD function to identify odd and even numbers, then filter the records with odd numbers and use a Sequence Generator.

4. How do you get the records starting with a particular letter, like 'A', in Informatica?
Use the SUBSTR function in a transformation condition, e.g. SUBSTR(ENAME, 1, 1) = 'A', or the SQL override shown above.
Materialized views are physical Oracle database objects. Using them, we can refresh data from database tables in a timely manner. A view, by contrast, is a logical database object: if any changes happen in the underlying table, those changes are reflected in the respective view as well.
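The distinction can be simulated in SQLite (which supports plain views but not materialized views, so the "materialized view" here is an ordinary snapshot table refreshed by hand — Oracle does this for you with its refresh mechanisms):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE sales (region TEXT, amount REAL);
    INSERT INTO sales VALUES ('east', 100.0);

    -- Logical object: re-evaluated against the base table on every query.
    CREATE VIEW v_sales AS SELECT SUM(amount) AS total FROM sales;

    -- "Materialized view": a physical snapshot of the query result.
    CREATE TABLE mv_sales AS SELECT SUM(amount) AS total FROM sales;
""")

con.execute("INSERT INTO sales VALUES ('west', 50.0)")  # base table changes

view_total = con.execute("SELECT total FROM v_sales").fetchone()[0]
mv_total = con.execute("SELECT total FROM mv_sales").fetchone()[0]
print(view_total)  # 150.0 -- the view sees the change immediately
print(mv_total)    # 100.0 -- the snapshot is stale until refreshed

# Refresh the "materialized view" on demand.
con.executescript(
    "DELETE FROM mv_sales; INSERT INTO mv_sales SELECT SUM(amount) FROM sales;"
)
refreshed = con.execute("SELECT total FROM mv_sales").fetchone()[0]
print(refreshed)   # 150.0 -- up to date after the refresh
```

This is why materialized views suit warehouse aggregates: the expensive query runs at refresh time, not at query time, at the cost of staleness between refreshes.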