Here, we will discuss how to connect to the Oracle BI Server and use the existing Subject Areas built on the RPD as a data source in a Tableau Desktop workbook.
NOTE: This is applicable for 8.2, 8.3, 9.0 and later versions.
First, to establish an ODBC connection to the Oracle BI Server and access the RPD online, install a copy of the Oracle Business Intelligence Developer Client Tools (available from the Oracle website). You can then reuse the same DSN to connect Tableau through an ODBC connection.
Once the client is installed, carry out the following steps to create a data source name (DSN):
1. Open the ODBC Administrator tool.
2. Go to the System DSN tab and click Add.
3. Choose the Oracle BI Server driver from the list of available drivers.
4. Enter the connection details and connect to the BI Server.
5. Save the system DSN.
In total, there are three layers in the RPD:
- Physical Layer (right pane) – This is where a connection is established to each data source and the raw tables are exposed. Joins across tables are defined here.
- Business Layer (middle pane) – This is where logical relations, data modelling and hierarchy development are implemented.
- Presentation Layer (left pane) – This is the layer exposed to the business through “subject areas”. The subject areas are modelled to present data in the most easy-to-understand format.
In Tableau, only tables from the Presentation Layer can be added as data sources.
Locate the key fields in the dimension and fact tables in the Physical Layer of the RPD and expose them through the corresponding presentation tables in the Presentation Layer of the Sample Sales Lite subject area.
Create a “.tdc” file
Create a Tableau data source customization (.tdc) file to connect Tableau Desktop to an Oracle BI server.
- Open a text editor and paste the code below into it:
<connection-customization class='genericodbc' enabled='true' version='9.3'>
<vendor name='Oracle Business Intelligence' />
<driver name='Oracle BI Server 1' />
<customization name='CAP_SET_ISOLATION_LEVEL_VIA_ODBC_API' value='no' />
<customization name='CAP_SUPPRESS_DISCOVERY_QUERIES' value='no' />
<customization name='SQL_CATALOG_USAGE' value='0' />
<customization name='SQL_SCHEMA_USAGE' value='0' />
<customization name='CAP_FAST_METADATA' value='yes' />
</connection-customization>
Note: The version attribute is crucial; set it to match your Tableau Desktop version. In my case, it is 9.3.
From dreams to reality: young entrepreneurs with a vision to train youngsters in big data analytics:
There is a famous saying that a life without a goal is like a ship without a radar. This saying was truly internalized by two young Noida-based entrepreneurs, Vivek Debuka and Subhrajyoti Adhikary. Thus their brainchild, DexLab Analytics, was born, with the vision of educating young minds in big data analytics skills.
According to Mr. Debuka, the CEO and one of the leading figures behind DexLab Analytics, their idea followed a different yet much-needed pattern. With training institutes in Gurgaon, Pune and Noida, it was an endeavour to make the country realize the driving force that data has become. Name any line of work, be it HR, management, sales or marketing, and it draws on data analytics. Thus, they are believers in ‘business is where data lies’.
They believe that the Indian education system has rich resources, such as talented, well-read professors and intelligent students, yet lacks a bridge to connect the two. Thus, DexLab offers a list of field-specific courses to train young minds according to the needs of the industry. People interested in a career in analytics are trained extensively in SAS, while those in the field of visualization learn Tableau.
DexLab’s Datacember is another awareness campaign, created exclusively for college students and filled with discounts.
The founders had the idea, but not the know-how to execute it, so the journey was not a bed of roses. They could sense failure until the turning point came, hand in hand with the extremely talented and hardworking consultants at DexLab Analytics. The gems in the crown include consultants from various educational backgrounds and working professionals from companies like Genpact, Bajaj and HDFC, to name a few.
The owners are confident about their idea: ‘data is here to stay’. So, after Gurgaon, they have extended their Machine Learning certification to Noida. They follow a simple yet planned economic motto of earn, save and invest, which DexLab treats as a value-oriented aspiration for yielding better results. They believe that the jobs of the future are data-driven, and this belief pushes them to spread data literacy.
DexLab Analytics is quite a reputable name in the data analytics training sector in India. They are also known for their live online training courses and their unique offering of free demo sessions. With data science and analytics being deemed the sexiest job of the 21st century, it only makes sense that the field is gathering a lot of attention from ambitious youth ready to start their careers.
Until now, analytics training was mostly reserved for people with preliminary or specialized degrees. But DexLab Analytics, with its latest winter admission offer, DexLab’s Datacember, is hoping to change that trend. They believe that analytics training should start at the very beginning of professional education, as the subject is highly complex and needs practice.
With this new promotional admission offer, they aim to show students that a career in data analytics is lucrative and full of scope for success, which is indeed the case in the present market. The offer can be availed by college students from any relevant field and only requires a student ID. It applies to the following courses: Business Analyst Certification, Data Science, and Big Data Hadoop, for classroom training at Gurgaon and Pune only.
Under DexLab’s Datacember drive, a single student enrolling gets a 10% discount on the course fees, while a group of three or more students gets a 30% discount each on the admission fees.
The date of registration for this offer is from 7th December 2016 to 14th January 2017. The Datacember offer will remain valid for December 2016 to March 2017.
The CEO of DexLab spoke about this initiative, saying, “In light of the same, the institute has launched DexLab’s Datacember – a campaign which offers to provide college students with attractive discounts to help them acquire these talents easily.”
Furthermore, the institute also plans to conduct corporate trainings and to collaborate with its long-time training partner RCPL, along with several hundred other colleges and universities in India, for workshops and corporate trainings on topics related to data science and Big Data analytics. This corporate training collaboration is slated to start on 10th December and run until 18th December 2016. The training sessions will be of 40 hours each and will cover R programming, analytics using R, and data science.
The first training session is about to take place at an engineering college in Bhubaneswar, Orissa for their client RCPL.
About DexLab Analytics:
Founded by a team of core industry experts with several years of experience in data science and analytics, the institute has been around since the beginning of the Big Data buzz. It is a reputable institute based in Gurgaon and Pune, known for quality training and placement support for its students.
DexLab Solutions Corp.
G. Road, Gurgaon 122 002, Delhi NCR
+91 852 787 2444
+91 124 450 2444
DexLab Solutions Corp.
Gokhale Road, Model Colony, Pune – 411016
+91 880 681 2444
+91 206 541 2444
For several years now, I have been associated with a company that helps individuals and organizations implement Data Governance in their systems. An interesting thing I have come across is that most people presume data governance to be a technical and analytical field. But the truth is, the majority of data governance roles are actually undertaken by business users. I have found that most successful data governance schemes are run as change management initiatives, led and supported by individuals with impeccable soft skills.
At the risk of penning clichés: passion, enthusiasm and the ability to persuade or motivate others toward a goal, be it in Data Governance or any other major programme, will definitely stack the odds in your favour. For most of you reading this article, Data Governance may seem too dry and mundane to be passionate about, too mechanical and uninspiring. But have you ever thought of it this way:
If you do not buy in, neither will they!
And that is not good news for any of the parties involved.
Strong communication skills are always a great asset. When proposing new approaches to governing data, you may need to convince masses of strangers and influence them on an individual level. History shows there is always resistance to change, and soft skills that soften the blow of transformation are a great tool for managing that resistance and channelling it in a positive direction.
So, if you are working on a Data Governance initiative, or aspire to work in some form of data analytics or management, and feel that communication is not your forte, then I recommend that you learn it through proper coaching and practice. After all, peering at large data sets does not help with social skills; that is understandable. But there may be a strong orator hidden behind those ‘nerd’ glasses who can translate the complexities of data governance and analytics into the language of the layperson.
I strongly recommend that you use a pragmatic approach when implementing Data Governance and decide to apply the same for your soft skills training.
At whatever stage you are currently in your Data Governance and analytics journey do not forget that spending some time to focus on your soft skills will definitely have a significant contribution towards the success of your initiative.
You can learn more about Data Analytics and governance along with soft skill training at DexLab Analytics.
It may be easy to run MapReduce on small datasets without much fuss over coding and fiddling, but only if you know what to do. Here is what you should know:
You may already know how MapReduce works on a conceptual level. In this blog post we will discuss how to write code that runs on Hadoop, starting with a MapReduce program in Java.
The development environment:
To begin, we will need Java (we are using Oracle JDK 6), Git, Hadoop and Maven. You can download the latest stable version of Apache Hadoop (1.0.4) from their release page and extract it to a suitable location.
This is what we did on our computer:
% tar zxf hadoop-1.0.4.tar.gz
% export HADOOP_HOME=$(pwd)/hadoop-1.0.4
% $HADOOP_HOME/bin/hadoop version
Then move to another directory and clone the Git repository that accompanies this article:
% git clone git://github.com/tomwhite/hadoop-drdobbs.git
% cd hadoop-drdobbs
% mvn install
This repository also contains a small amount of sample data suitable for testing, like this:
% cat data/*.tsv
dobbs 2007 20 18 15
dobbs 2008 22 20 12
doctor 2007 545525 366136 57313
doctor 2008 668666 446034 72694
The file contains a few lines from the Google Books Ngram Dataset. To clarify what they mean: the first line records that the word “dobbs” appeared 20 times in books from 2007, and that these occurrences were spread over 18 pages in 15 books.
Writing the Java MapReduce:
To find the total count for each word, let us begin writing the MapReduce job. We start with the map function, which in Java is represented by an instance of org.apache.hadoop.mapreduce.Mapper.
The first step is to decide on the mapper’s types: the input key-value pairs and the output key-value pairs. The declaration of the Mapper class is:
public class Mapper<KEYIN, VALUEIN, KEYOUT, VALUEOUT>
Since we are going to process text, we will use TextInputFormat. This determines the input types as LongWritable and Text, both found in the org.apache.hadoop.io package. These Writable types act as wrappers around standard Java types (in this case, long and String), optimized for efficient serialization.
Authors of MapReduce programs can use the Writable types without having to think about serialization. The only time they may need to consider it is when writing a custom Writable type, and in such circumstances it is recommended to use a serialization library such as Avro.
Coming back to the input types: TextInputFormat presents the input to our mapper as (LongWritable, Text) pairs, like this:
(0, “dobbs 2007 20 18 15”)
(20, “dobbs 2008 22 20 12”)
(40, “doctor 2007 545525 366136 57313”)
(72, “doctor 2008 668666 446034 72694”)
The key is the byte offset of the line within the file, and the value is the content of the line. The mapper’s job is to extract the word along with its number of occurrences and ignore the rest, so its output is (word, count) pairs, of type (Text, LongWritable). The signature of the mapper looks like this:
public class ProjectionMapper extends Mapper<LongWritable, Text, Text, LongWritable>
Then the only thing left is to write the implementation of the map() method; the source for the whole mapper class appears in Listing One (ProjectionMapper.java).
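The listing itself did not survive in this copy of the post. As a rough, Hadoop-free sketch of the same projection logic (a hypothetical helper class, not the actual ProjectionMapper from the accompanying repository, which extends Hadoop’s Mapper):

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.Map;

// Hypothetical sketch of the projection step: split a tab-separated
// line and keep (word, occurrence-count), returning null for lines
// whose count field is not a number, as the text describes.
public class ProjectionSketch {
    public static Map.Entry<String, Long> project(String line) {
        String[] fields = line.split("\t");
        if (fields.length < 3) {
            return null; // malformed line: skip it
        }
        try {
            // first field is the word, third field is the occurrence count
            return new SimpleEntry<>(fields[0], Long.parseLong(fields[2]));
        } catch (NumberFormatException e) {
            return null; // non-numeric count: skip the line
        }
    }

    public static void main(String[] args) {
        Map.Entry<String, Long> e = project("dobbs\t2007\t20\t18\t15");
        System.out.println(e.getKey() + "\t" + e.getValue()); // dobbs	20
    }
}
```

Running it on the first sample line yields the pair ("dobbs", 20), mirroring the (Text, LongWritable) output the real mapper would emit.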
But there are certain things that one must know about this code.
- There are two instance variables, word and count, which are used to store the map output key and value.
- The map() method is called once per input record, so the class avoids unnecessary creation of objects.
- The body of map() is straightforward: it splits the tab-separated input line into fields, taking the first field as the word and the third as the count.
- The map output is written using the write() method on the Context.
For simplicity, we have built the code to ignore lines with an occurrence field that is not a number, but there are other actions one could take, for instance incrementing a MapReduce counter to track how many lines are affected; see the getCounter() method on Context for more information. After running over our small dataset, the map output looks like this:
(“dobbs”, 20)
(“dobbs”, 22)
(“doctor”, 545525)
(“doctor”, 668666)
Hadoop then transforms the map output so that all the values for a given key are brought together; this process is called the shuffle. In our abstract representation, the input to the reduce step looks like this:
(“dobbs”, [20, 22])
(“doctor”, [545525, 668666])
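To make the shuffle concrete, here is a small plain-Java sketch (a hypothetical helper, not Hadoop itself) that groups map-output pairs by key the way the framework does:

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Hypothetical sketch of the shuffle: group (word, count) map-output
// pairs by key so each key carries the list of all its values.
public class ShuffleSketch {
    public static Map<String, List<Long>> shuffle(
            List<? extends Map.Entry<String, Long>> mapOutput) {
        // TreeMap keeps keys sorted, as Hadoop presents them to reducers
        Map<String, List<Long>> grouped = new TreeMap<>();
        for (Map.Entry<String, Long> pair : mapOutput) {
            grouped.computeIfAbsent(pair.getKey(), k -> new ArrayList<>())
                   .add(pair.getValue());
        }
        return grouped;
    }

    public static void main(String[] args) {
        List<Map.Entry<String, Long>> out = new ArrayList<>();
        out.add(new SimpleEntry<>("dobbs", 20L));
        out.add(new SimpleEntry<>("dobbs", 22L));
        out.add(new SimpleEntry<>("doctor", 545525L));
        out.add(new SimpleEntry<>("doctor", 668666L));
        // prints {dobbs=[20, 22], doctor=[545525, 668666]}
        System.out.println(shuffle(out));
    }
}
```

This reproduces exactly the grouped pairs shown above; in real Hadoop the grouping also happens across machines, which this sketch does not model.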
Our reduce implementation simply sums the counts. We need an implementation of org.apache.hadoop.mapreduce.Reducer with the following signature:
public class LongSumReducer extends Reducer<Text, LongWritable, Text, LongWritable>
We could write this code ourselves, but with Hadoop we do not need to, as it ships with an implementation, shown in Listing Two (LongSumReducer.java, code obtained from the Apache Hadoop project).
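Listing Two is also missing from this copy. As a plain-Java sketch of what the reduce logic boils down to (a hypothetical helper, not Hadoop’s actual LongSumReducer class):

```java
// Hypothetical, Hadoop-free sketch of the reduce step's core logic:
// sum every value grouped under a key and emit (key, total).
public class SumSketch {
    public static long sum(Iterable<Long> values) {
        long total = 0;
        for (long v : values) {
            total += v;
        }
        return total;
    }

    public static void main(String[] args) {
        // ("dobbs", [20, 22]) reduces to ("dobbs", 42)
        System.out.println("dobbs\t" + sum(java.util.Arrays.asList(20L, 22L)));
    }
}
```

The real reducer does the same thing with an Iterable of LongWritable values, writing the total out via the Context with the input key unchanged.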
A noteworthy point: the reduce() method signature differs slightly from map()’s, because it takes an iterator over the values rather than a single value. This reflects the grouping that the framework performs on the values for a key.
The implementation is fairly simple in the LongSumReducer: it sums the values and then writes the total out using the same key as the input.
The output from the reducer will be:
(“dobbs”, 42)
(“doctor”, 1214191)
This was the first part of this blog; for the remaining steps, follow the next post from the premier SAS training centre in Pune. In the next instalment we will cover Listing Three and actually running the job.
Interested in a career as a Data Analyst?
To learn more about Machine Learning Using Python and Spark – click here.
To learn more about Data Analyst with Advanced excel course – click here.
To learn more about Data Analyst with SAS Course – click here.
To learn more about Data Analyst with R Course – click here.
To learn more about Big Data Course – click here.
Even if you are not a data scientist yet, data still surrounds you, engulfing you in a cloud of structured, specified and targeted information: data that you use every day and that actually shapes your daily routines of commuting to work, the gym or entertainment. It is like the omnipresent atmosphere that we often take for granted. Why do we say that?
Here is an extract from the life of a non-technical executive on our team; after reading it, many of you may feel that it is somewhat similar to your own story.
On an ordinary day, our employee wakes to his alarm and remembers that his flight leaves at 5 o’clock that morning. He looks at his smartphone and checks the updates on his flight: it is on time, and the security queues are moving smoothly. He swipes around some more to see whether the traffic situation is on his side today. Finding the traffic light, unlike most other days, he decides to cut his commute time tight, expecting to reach the airport within 15 minutes. So he concludes there is ample time to leave at 4:00 am and feels confident about the decision: he made an informed choice, so the chances of things going wrong are low.
After his morning routine he prepares to set out and opens the Ola/Uber app on his phone to call a cab. The app immediately responds that a driver is 2 minutes away. Almost instantly the cabbie calls to pin down the precise location of his house and confirms he will be there in a couple of minutes.
Soon after boarding the cab, our friend opens his health app and connects it via Bluetooth to his smartwatch. He notes with a scowl that he is not getting enough exercise and got only 5 hours of deep sleep last night. Then, bored in the cab, he opens the new Microsoft app that uses the phone camera to guess the age of a face; unflatteringly, the app gives him a number 7 years above his actual age! Still, our executive friend feels this is a good start to the day: first, because he had the power of data to make educated decisions about some simple yet troublesome things, and second, because he got a cab fairly fast.
Now, this story may seem like a pretentious rant about pseudo-first-world problems, but our point is not the luxuries available to the modern, urban, smartphone-owning working class. Our point is to emphasize how, almost unknowingly, we have let data into our lives; the myth (and/or fact) of choice is real, and we exercise it unknowingly while feeding and accessing the omnipresent phenomenon of Big Data.
Yes, Big Data did not just come to the office one day and sit in a cabin labelled “Big Data at work”. Like electricity, it is a utility that changes our lives and influences our decision-making. Still unconvinced? Then conduct a simple survey among your friends: ask how many people you know buy expensive, sub-par products without going through the ratings or reviews. If you hadn’t realized it yet, this is what you could label “Big Data at work”.
Thus, in closing, Big Data analytics is the fundamental capability that will affect and transform our daily lives forevermore, and what we see today is evidently just the tip of the iceberg.
So, start your Big Data certification in Pune today, with DexLab Analytics.
We have all established that Big Data is big, and that all the noise about Big Data is not just hype but reality. With the advance of technology, the data generated on Earth doubles every 40 months, and huge heaps of data keep coming in from multiple sources. Let’s look at some figures to understand how Big Data is evolving:
- The population of the world is 7 billion, and out of these 7 billion, 5.1 billion people use a smart phone device.
- On an average everyday almost 11 billion texts are sent across the globe.
- The global number of Google searches everyday is 5 billion
But there is an imbalance: we have been creating data but not consuming enough of it for proper use. We generate 2.5 quintillion bytes of data daily through our regular online activities, including online communications, online behaviour, video streaming services and much more.
Studies carried out in 2012 showed that the world had generated more than 2 zettabytes of data, roughly equal to 2 trillion gigabytes. By the year 2020, we will generate around 35 trillion gigabytes, and to manage this growing amount of data we will need 10 times the servers we use now, at least 50 times more data management systems, and 75 times the files to manage it all.
The industry is still not equipped to handle such an explosion of data, as 80% of it is unstructured. Handling this volume is beyond the scope of traditional statistical analysis tools; it is too complicated and unorganized.
The talent pool required to effectively manage Big Data will fall short by at least 100 thousand minds, as there are only about 500 thousand computer scientists and fewer than 3,000 mathematicians. To truly utilize the complete potential of Big Data, we need more human resources and more tools.
The answer to this ever-bigger problem is Big Data analytics: a fresh way of thinking about company objectives and the strategies created to achieve them. Big Data analytics reveals where the hidden opportunities lie.
SAS, R programming, Hadoop, Pig, Spark and Hive are a few advanced tools currently in use in the data analysis industry. SAS experts are highly in demand in the job market, as SAS is emerging as an increasingly popular tool for handling data analysis problems. To learn more about SAS training institutes, follow our latest posts at DexLab Analytics.
For more information please read our blog at http://www.dexlabanalytics.com/blog/the-evolution-of-big-data-in-business-decision-making
In India, the hottest job locations for a data analyst, according to our pay-scale and job-scenario survey, are Gurgaon, Mumbai and Bangalore. For more details on the packages on offer for various data analysis positions, view our infographic, with numbers based on an industry survey.
Related posts:
Though the uses of MS Excel are more varied than those of R, when it comes to the world of Big Data, R outperforms Excel by leaps and bounds. Handling and manipulating data is done far more effectively with R. Watch this presentation if you wish to know the exact reasons that give R a competitive edge.
Related posts:
The global market for Business Analytics stood at $42.55 billion in 2014 and is set to touch $70.11 billion by 2022, witnessing a CAGR of a considerable 6.44% over the forecast period of 2014 to 2022. Factors fuelling the market’s growth include rising demand for analytics as more and more organizations embrace Big Data, a changing environment in which businesses operate, and customer choices shifting with unprecedented swiftness. Standing in the way of growth are the relatively high costs of implementation, a general unwillingness to adopt Business Analytics, and severe shortages of skilled workers with the technical ability to run Business Analytics applications.
The lion’s share of the market has been captured by the financial services, insurance and banking sectors. The top ten vendors of business analytics software together accounted for 70% of the worldwide market share as of 2013. In that year, SAP, Oracle, IBM and Microsoft together made more than 50% of all sales of business analytics software, while Tableau earned the distinction of fastest-growing software company in the business analytics category, witnessing 80% growth in a single year.
The global market for business analytics is segmented by application, deployment, end user, software and geography. By deployment, the market splits into cloud and on-site. By end user, it may be categorized into medium and small businesses and large enterprises. By application, it may be segmented into IT and telecom, media and entertainment, retail, healthcare, manufacturing, energy and power, government, banking, education, insurance, and financial services.
By software, the global business analytics market may be segregated into search and alter, big data performance and management, predictive analytics, data discovery, visualization software, business intelligence, and content analytics. Geographically, the markets are North America, Europe, Asia Pacific, and the rest of the world.
The key players in this market are Infor, IBM, Microsoft, Oracle, MicroStrategy Incorporated, SAS Institute, Tableau, Qlik Technologies and TIBCO Software.
Business Analyst Certification Training
If you are interested in this field and are contemplating a career in it, it is highly advisable to sign up for Business Analyst certification training in Delhi.
Related posts: