Hi. I'm trying to create a TM1 cube from a Cognos package, because I don't want to throw away the time already spent on modeling. What is the best practice? I like the idea that TM1 Performance Modeler could extract all the metadata (there are some hierarchical dimensions and measures) and re-create the dimensions and measures in a TM1 cube from them. But if this is the fastest way, I'm struggling with some issues:
When using the guided import, I set the source as a Cognos package. The package has a relational database as its underlying data source (dimensionally modeled relational, DMR). When I try to preview the data or see the mapping, Performance Modeler stops responding; some Java computation keeps running, but for a very long time (two days minimum).
When I repeat this procedure against an empty copy of my underlying relational database (the tables exist but contain no rows), I do get my preview (empty columns) and the mapping. I believe Performance Modeler pulls the data from Cognos and saves it to a flat file before showing it to me, hence the freeze: slow disk plus a large table.
Once the TI process has been created against the empty DB, I swap the DBs and re-run it. The Performance Modeler IDE responds now (it shows something like a progress bar), but there is still long Java processing in the background. I think the flat-file middle step is gone now, but it is still slow.
Any ideas how to achieve better performance, or is there a better approach? I can share the TurboIntegrator code generated by Performance Modeler; I saw on the forums that some low-performance statements can be generated, but I did not manage to spot any myself.
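For context, this is the kind of pattern I've seen cited on the forums as slow: element maintenance done record-by-record on the Data tab instead of the Metadata tab. This is a hypothetical sketch of what I'm looking for, not my actual generated code, and the cube, dimension, and variable names are made up:

```
# Hypothetical slow pattern (Data tab): dimension maintenance mixed
# with the data load, so it runs once per source record
IF ( DIMIX ( 'Product', vProduct ) = 0 );
  DimensionElementInsert ( 'Product', '', vProduct, 'N' );
ENDIF;
CellPutN ( vValue, 'SalesCube', vProduct, vMonth, 'Amount' );
```

Is this the sort of statement I should be hunting for in the generated process, or are there other known offenders?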