// Switch to the newly opened tab, close it, then return to the original tab.
// Note: driver.getWindowHandles() returns a Set<String>, so handle order is not
// strictly guaranteed; copying into a list usually reflects opening order, but a
// more robust approach is to remember the original handle before the new tab opens.
ArrayList<String> tabs2 = new ArrayList<>(driver.getWindowHandles());
driver.switchTo().window(tabs2.get(1));
driver.close();
driver.switchTo().window(tabs2.get(0));
1.) I would like to test a data flow between a Hadoop data lake and Teradata tables, but I am new to these technologies. The data lake is the source for the data warehouse I have on Teradata. I have read about QuerySurge, but I'd like to know whether it is possible to write my own scripts to test the flows.
Solution: QueryGrid is a Teradata offering that lets you create "linked servers" on your Teradata platform. Through these linked servers you can query data on a Hadoop platform directly from Teradata. Currently, these workloads are intended to be low concurrency; that landscape is evolving quickly, and concurrency rates may increase as the technologies mature.
2.) I am looking for good testing tools for data warehouse / big data systems on the market, and their features. I need to validate source and target data, transformation logic, etc., and to reduce the testing effort.
Solution: I would suggest a CI/AT (Continuous Integration / Automated Testing) approach: for the testing aspect, build source and target extracts of various types, store them in a reconciliation database, and run automated tests across it after every load.
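The core of such a reconciliation test is comparing source and target extracts keyed by primary key. The sketch below is a minimal, self-contained illustration of that idea; the class name `ReconcileCheck` and the use of in-memory maps standing in for the extracted rows are assumptions for the example, not part of any specific tool.

```java
import java.util.*;

public class ReconcileCheck {
    // Compare source and target extracts (primary key -> row content) and
    // return the sorted set of keys whose rows are missing or differ.
    static Set<String> mismatches(Map<String, String> source, Map<String, String> target) {
        Set<String> bad = new TreeSet<>();
        for (Map.Entry<String, String> e : source.entrySet()) {
            // Key missing from target, or value differs.
            if (!e.getValue().equals(target.get(e.getKey()))) bad.add(e.getKey());
        }
        for (String k : target.keySet()) {
            // Key present in target but absent from source.
            if (!source.containsKey(k)) bad.add(k);
        }
        return bad;
    }

    public static void main(String[] args) {
        Map<String, String> src = Map.of("1", "alice", "2", "bob");
        Map<String, String> tgt = Map.of("1", "alice", "2", "bobby", "3", "carol");
        System.out.println(mismatches(src, tgt)); // keys 2 and 3 fail reconciliation
    }
}
```

In a real pipeline the two maps would be populated from the Hadoop and Teradata extracts (e.g. via JDBC), and a non-empty mismatch set would fail the CI run after each load.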