Tuesday, November 5, 2013

REF Cursor with example

A REF CURSOR is a datatype that holds a cursor value in the same way that a VARCHAR2 variable holds a string value.
A REF CURSOR can be opened on the server and passed to the client as a unit, rather than fetching one row at a time. A ref cursor can be the target of an assignment, and it can be passed as a parameter to other program units. Ref cursors are opened with an OPEN FOR statement; in most other ways they behave similarly to normal cursors.

Example:

CREATE OR REPLACE FUNCTION fnc_dept_refcursor(p_deptno employees.department_id%TYPE)
  RETURN SYS_REFCURSOR
AS
  c_cur SYS_REFCURSOR;
BEGIN
  OPEN c_cur FOR
    SELECT employee_id, last_name, salary
      FROM employees
     WHERE department_id = p_deptno;
  RETURN c_cur;
END;
/

--==================================================

DECLARE
  c SYS_REFCURSOR;
  TYPE  emp_rec IS RECORD(employee_id employees.employee_id%type,
                          last_name employees.last_name%type,
                          salary employees.salary%type);
  v_rec emp_rec;
BEGIN
  c := fnc_dept_refcursor(p_deptno => 80); --Change Department_Id as required
  dbms_output.put_line('Employee_ID'||','||'Last_Name'||','||'Salary');
  LOOP
    FETCH c INTO v_rec;
    EXIT WHEN c%NOTFOUND;
    dbms_output.put_line(v_rec.employee_id||','||v_rec.last_name||','||v_rec.salary);
  END LOOP;
  CLOSE c;
END;
/

Output:
Employee_ID,Last_Name,Salary
145,Russell,14000
146,Partners,13500
147,Errazuriz,12000
148,Cambrault,11000
149,Zlotkey,10500
150,Tucker,10000
151,Bernstein,9500
152,Hall,9000
153,Olsen,8000
154,Cambrault,7500
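Because the function returns SYS_REFCURSOR, the result set can also be fetched directly in SQL*Plus with a REFCURSOR bind variable, without writing any PL/SQL loop. A quick sketch, assuming the fnc_dept_refcursor function above has been created:

```sql
VARIABLE rc REFCURSOR

-- Open the cursor by calling the function
EXEC :rc := fnc_dept_refcursor(p_deptno => 80);

-- SQL*Plus fetches and displays all rows from the ref cursor
PRINT rc
```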

Sunday, January 2, 2011

Send data automatically to an FTP server with a DOS batch file

Save the following code in a batch file and run it:

@echo off
rem Rebuild the FTP command script from scratch
del /Q DosFtpFile.ftp
echo FTP_USER_NAME> DosFtpFile.ftp
echo FTP_PASSWORD>> DosFtpFile.ftp
rem Queue a put command for every .txt file in the current folder
FOR %%i IN (*.txt) DO echo put %%i>> DosFtpFile.ftp
echo bye>> DosFtpFile.ftp
rem Short pause before connecting
ping -n 3 127.0.0.1 >NUL
rem Run the generated script against the FTP server
ftp -s:DosFtpFile.ftp FTP_MACHINE_NAME/IP

The script above automatically sends every *.txt file in the current directory to the specified FTP server.
Note that you can change *.txt to *.csv or any other pattern you require.

Monday, June 21, 2010

Data Mart

Data Mart:
A data mart is a simple form of a data warehouse that is focused on a single subject (or functional area), such as Sales, Finance, or Marketing. Data marts are often built and controlled by a single department within an organization. Given their single-subject focus, data marts usually draw data from only a few sources. The sources could be internal operational systems, a central data warehouse, or external data.

Another definition of Data Mart:
A data mart is a subject-oriented block of data in the data warehouse for a business line such as production, sales, or marketing. There are two kinds of data marts: independent and dependent.
What is Metadata?
Metadata is information about the data. For a data mart, metadata includes:

01. A description of the data in business terms
02. The format and definition of the data in system terms
03. Data sources and the frequency with which the data is refreshed

How Is It Different from a Data Warehouse?

A data warehouse, unlike a data mart, deals with multiple subject areas and is typically implemented and controlled by a central organizational unit such as the corporate Information Technology (IT) group. Often, it is called a central or enterprise data warehouse. Typically, a data warehouse assembles data from multiple source systems.

Dependent and Independent Data Marts:

There are two basic types of data marts: dependent and independent. The categorization is based primarily on the data source that feeds the data mart. Dependent data marts draw data from a central data warehouse that has already been created. Independent data marts, in contrast, are standalone systems built by drawing data directly from operational or external sources of data, or both.

The main difference between independent and dependent data marts is how you populate the data mart; that is, how you get data out of the sources and into the data mart. This step, called the Extraction-Transformation-and Loading (ETL) process, involves moving data from operational systems, filtering it, and loading it into the data mart.

With dependent data marts, this process is somewhat simplified because formatted and summarized (clean) data has already been loaded into the central data warehouse. The ETL process for dependent data marts is mostly a process of identifying the right subset of data relevant to the chosen data mart subject and moving a copy of it, perhaps in a summarized form.

With independent data marts, however, you must deal with all aspects of the ETL process, much as you do with a central data warehouse. The number of sources is likely to be fewer and the amount of data associated with the data mart is less than the warehouse, given your focus on a single subject.
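For a dependent data mart, the ETL step can often be reduced to copying a summarized subset out of the central warehouse. A minimal sketch, where the warehouse table wh_sales and the mart table mart_sales_summary are hypothetical names:

```sql
-- Build a sales data mart table from the central warehouse,
-- keeping only the chosen subject area and summarizing as we copy.
CREATE TABLE mart_sales_summary AS
SELECT product_id,
       region,
       TRUNC(sale_date, 'MM') AS sale_month,
       SUM(amount)            AS total_amount
  FROM wh_sales
 WHERE sale_date >= DATE '2010-01-01'
 GROUP BY product_id, region, TRUNC(sale_date, 'MM');
```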


Implementation Steps of a Data Mart:
The steps are:

01. Designing
02. Constructing
03. Populating
04. Accessing
05. Managing


Designing:
The design step is first in the data mart process. This step covers all of the tasks from initiating the request for a data mart through gathering information about the requirements, and developing the logical and physical design of the data mart.
The design step involves the following tasks:

01. Gathering the business and technical requirements.
02. Identifying data sources.
03. Selecting the appropriate subset of data.
04. Designing the logical and physical structure of the data mart.


Constructing:
This step includes creating the physical database and the logical structures associated with the data mart to provide fast and efficient access to the data.
This step involves the following tasks:
01. Creating the physical database and storage structures, such as tablespaces, associated with the data mart.
02. Creating the schema objects, such as tables and indexes defined in the design step.
03. Determining how best to set up the tables and the access structures.

Populating:
The populating step covers all of the tasks related to getting the data from the source, cleaning it up, modifying it to the right format and level of detail, and moving it into the data mart.

The populating step involves the following tasks:

01. Mapping data sources to target data structures.
02. Extracting data.
03. Cleansing and transforming the data.
04. Loading data into the data mart.
05. Creating and storing metadata.

Accessing:
The accessing step involves putting the data to use: querying the data, analyzing it, creating reports, charts, and graphs, and publishing these. Typically, the end user uses a graphical front-end tool to submit queries to the database and display the results of the queries.

The accessing step requires that you perform the following tasks:

01. Set up an intermediate layer for the front-end tool to use. This layer, the metalayer, translates database structures and object names into business terms, so that the end user can interact with the data mart using terms that relate to the business function.
02. Maintain and manage these business interfaces.
03. Set up and manage database structures, like summarized tables that help queries submitted through the front-end tool execute quickly and efficiently.

Managing:
This step involves managing the data mart over its lifetime. In this step, you perform management tasks such as the following:

01. Providing secure access to the data.
02. Managing the growth of the data.
03. Optimizing the system for better performance.
04. Ensuring the availability of data even with system failures.

Tuesday, April 20, 2010

INTERVAL Problem with DBMS_JOB Package

The interval part of the job is evaluated AFTER the job is executed. If you want the job to run at the top of the hour and you specify an interval of 'sysdate+1/24', recognize that the value of sysdate is evaluated after the job completes. So if the elapsed time of the job is 5 minutes, the job will execute at times like 1:00, 2:05, 3:10, 4:15, 5:20, and so on. If you want the job to run on the hour, every hour, you need to account for a possible delay in the start of the job as well as its elapsed run time. Just make sure the expression used for the interval produces the value you want; for example, to schedule a job to run at the top of each hour, specify the interval as 'trunc(sysdate,''HH24'')+1/24'.

Examples:

Run every day at 8:00 in the morning:

dbms_job.submit(:jno, 'xyz;', trunc(sysdate)+1/3, 'trunc(sysdate+1)+1/3'); -- 1/3 of a day = 08:00
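Putting the interval advice above into a complete call, this sketch schedules a job at the top of every hour (my_hourly_proc is a hypothetical procedure name):

```sql
VARIABLE jno NUMBER

BEGIN
  DBMS_JOB.SUBMIT(
    job       => :jno,
    what      => 'my_hourly_proc;',
    next_date => TRUNC(SYSDATE, 'HH24') + 1/24,       -- next top of the hour
    interval  => 'TRUNC(SYSDATE, ''HH24'') + 1/24');  -- re-evaluated after each run
  COMMIT;  -- the job is not visible to the job queue until committed
END;
/
```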
Example for choosing a date & time:
---------------------------------

select trunc(sysdate+1)+1/24*8  "08:00 AM",
       trunc(sysdate+1)+1/24*9  "09:00 AM",
       trunc(sysdate+1)+1/24*10 "10:00 AM"
  from dual;

Sample Output:
==========

08:00 AM               09:00 AM               10:00 AM
-----------            -----------            -----------
4/20/2010 8:00:00 AM   4/20/2010 9:00:00 AM   4/20/2010 10:00:00 AM

For more see the following link :
http://www.pafumi.net/dbms_job.htm

Wednesday, March 24, 2010

What is SQL?

What is SQL?

SQL is pronounced “Ess-Queue-Ell” or “Sequel,” depending on the source and location of your training. The meaning of the acronym SQL is Structured Query Language. SQL is the language used to access data from a relational database, such as Oracle Database. SQL was originally devised as a computer language with the development of the IBM relational database called System R. The idea of SQL originates from the relational data model. SQL is used to access subsets of data, in the form of rows or tuples from a larger data set. A data set is represented by a related group of entities, in a data model of a set of tables, in an Oracle Database schema.


What is DUAL Table?

Every SQL statement is processed through an implicit cursor, an area in memory allocated for the results of the statement. SELECT statements require a source table for the implicit cursor to operate on. The DUAL table is a dummy table that allows a SELECT statement to retrieve data not stored in any table, view, or other database object: the DUAL table acts as a temporary repository for the result of an expression, and that result is selected from DUAL, as in the following examples:

SELECT * FROM DUAL;

SELECT SYSDATE FROM DUAL;

SELECT USER FROM DUAL;

SELECT 'This is a string' FROM DUAL;


What is NULL value?

There are several things to remember about NULL:

•  NULL represents nothing.
•  Not even a space character is NULL.
•  NULL is not the same as zero.
•  NULL values are not always created in indexes.
•  Most functions return NULL when passed a NULL value.
•  Test for NULL using IS [NOT] NULL.
•  An expression containing a NULL returns NULL.
•  Use the NVL(value, replace) function to account for NULL values in expressions.
•  NULL values sort as the highest value.
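A few of these points can be verified directly from DUAL; a quick sketch:

```sql
-- NULL is not equal to anything, not even NULL; test with IS NULL instead
SELECT 'null test' FROM DUAL WHERE NULL = NULL;   -- returns no rows
SELECT 'null test' FROM DUAL WHERE NULL IS NULL;  -- returns one row

-- An expression containing a NULL returns NULL; NVL supplies a replacement
SELECT 1 + NULL          AS with_null,            -- NULL
       1 + NVL(NULL, 0)  AS with_nvl              -- 1
  FROM DUAL;
```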


  • What Is the Parser?

  • The meaning of the word parsing is syntactical, and perhaps grammatical or semantic, analysis of a sentence. In programming terms, parsing is a syntax check on a line of program code and between related lines. SQL statements are lines of program code, so the parser is effectively a compiler of SQL statements, much as a C compiler is a syntax analyzer of C language commands.

    Parses can be split into two categories:

    Hard parse: No match is found in the shared pool for a SQL statement never before submitted to the database engine.

    Soft parse: A match is found for a SQL statement, perhaps submitted to the database engine by a different session.

    Note: Hard parses are more expensive in system resources than soft parses, but reparsing is best avoided altogether if possible.


Why is a COMMIT statement faster than a ROLLBACK statement?
What occurs in the database during the processing of these statements?

•  Before COMMIT or ROLLBACK:
   -  Redo logs are written.
   -  The database is changed.
   -  Rollback (undo) is written.
•  On COMMIT:
   -  Rollback is deleted.
•  On ROLLBACK:
   -  Rollback is recorded in the redo logs.
   -  Rollback is applied to the database.
   -  Rollback is deleted.


How to tune a JOIN?

There are several factors to consider:

  • Use equality first.
  • Use range operators where equality does not apply.
  • Avoid use of negatives in the form of != or NOT.
  • Avoid LIKE pattern matching.
  • Try to retrieve specific rows, and in small numbers.
  • Filter from large tables first to reduce the number of rows joined. Retrieve tables in order from the most highly filtered table downward, preferably the largest table, which has the most filtering applied.
  • Use indexes wherever possible, except for very small tables.
  • Let the optimizer do its job.
  • Consider materialized views and query rewrite. Materialized views are somewhat outside the scope of OLTP databases, but perhaps not so in the future.


  • Ref: Oracle® Performance Tuning for 10gR2 (Second Edition)
    By Gavin Powell
     

Thursday, March 11, 2010

To Get User Environment Value From ORACLE Database Using USERENV Parameter.

Select SYS_CONTEXT('USERENV','TERMINAL') terminal,
SYS_CONTEXT('USERENV','LANGUAGE') language,
SYS_CONTEXT('USERENV','SESSIONID') sessionid,
SYS_CONTEXT('USERENV','INSTANCE') instance,
SYS_CONTEXT('USERENV','ENTRYID') entryid,
SYS_CONTEXT('USERENV','ISDBA') isdba,
SYS_CONTEXT('USERENV','NLS_TERRITORY') nls_territory,
SYS_CONTEXT('USERENV','NLS_CURRENCY') nls_currency,
SYS_CONTEXT('USERENV','NLS_CALENDAR') nls_calendar,
SYS_CONTEXT('USERENV','NLS_DATE_FORMAT') nls_date_format,
SYS_CONTEXT('USERENV','NLS_DATE_LANGUAGE') nls_date_language,
SYS_CONTEXT('USERENV','NLS_SORT') nls_sort,
SYS_CONTEXT('USERENV','CURRENT_USER') current_user,
SYS_CONTEXT('USERENV','CURRENT_USERID') current_userid,
SYS_CONTEXT('USERENV','SESSION_USER') session_user,
SYS_CONTEXT('USERENV','SESSION_USERID') session_userid,
SYS_CONTEXT('USERENV','PROXY_USER') proxy_user,
SYS_CONTEXT('USERENV','PROXY_USERID') proxy_userid,
SYS_CONTEXT('USERENV','DB_DOMAIN') db_domain,
SYS_CONTEXT('USERENV','DB_NAME') db_name,
SYS_CONTEXT('USERENV','HOST') host,
SYS_CONTEXT('USERENV','OS_USER') os_user,
SYS_CONTEXT('USERENV','EXTERNAL_NAME') external_name,
SYS_CONTEXT('USERENV','IP_ADDRESS') ip_address,
SYS_CONTEXT('USERENV','NETWORK_PROTOCOL') network_protocol,
SYS_CONTEXT('USERENV','BG_JOB_ID') bg_job_id,
SYS_CONTEXT('USERENV','FG_JOB_ID') fg_job_id,
SYS_CONTEXT('USERENV','AUTHENTICATION_TYPE') authentication_type,
SYS_CONTEXT('USERENV','AUTHENTICATION_DATA') authentication_data,
SYS_CONTEXT('USERENV','CURRENT_SQL') current_sql,
SYS_CONTEXT('USERENV','CLIENT_IDENTIFIER') client_identifier,
SYS_CONTEXT('USERENV','GLOBAL_CONTEXT_MEMORY') global_context_memory
From Dual;


Thursday, March 4, 2010

Sequence, Nextval & Currval

Sequence:

A sequence is a database object that generates unique numbers, mostly used for primary key values. One can select the NEXTVAL and CURRVAL from a sequence. Selecting the NEXTVAL will automatically increment the sequence.

Another Definition,
A sequence is a highly scalable, non-blocking generator that produces unique numbers.

Nextval:
If NEXTVAL is invoked on a sequence, it makes sure that a unique number is generated.

Currval:
CURRVAL can be referenced only after the session has already generated a number with NEXTVAL for that sequence; CURRVAL then returns the number most recently generated by NEXTVAL in that session.


Create a simple sequence:

CREATE SEQUENCE empseq;
 Sequence created.
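CREATE SEQUENCE also accepts options controlling the starting point, step size, caching, and cycling behavior. A sketch with commonly used options (empseq2 is an illustrative name):

```sql
CREATE SEQUENCE empseq2
  START WITH   100   -- first number returned by NEXTVAL
  INCREMENT BY 1     -- step between generated numbers
  CACHE        20    -- pre-allocate numbers in memory for speed
  NOCYCLE;           -- raise an error instead of wrapping around
```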

Selecting from the sequence:
SQL> select empseq.nextval from dual;
   NEXTVAL
----------
         1

SQL> select empseq.nextval from dual;
   NEXTVAL
----------
         2

Note that nextval and currval return the same value for each row of a select:

SQL> select empseq.nextval, empseq.currval from dual;
   NEXTVAL    CURRVAL
---------- ----------
         3          3

In PL/SQL, up to Oracle 10g, nextval and currval have to be selected from dual:

declare
  n number;
begin

  select sequence_test.nextval into n from dual;

  dbms_output.put_line(n);

end;
/


In Oracle 11g, one is no longer required to select from dual:

declare
  n number;
begin

  n := sequence_test.nextval;

  dbms_output.put_line(n);

end;
/

Wednesday, March 3, 2010

Index, Bitmap Index, B-Tree Index

What is Index?

An index is a performance-tuning method of allowing faster retrieval of records. An index creates an entry for each value that appears in the indexed columns. By default, Oracle creates B-tree indexes.

Oracle's two major index types are Bitmap indexes and B-Tree indexes. B-Tree indexes are the regular type that OLTP systems make much use of, and bitmap indexes are a highly compressed index type that tends to be used primarily for data warehouses.

What is Bitmap Index?

A bitmap index is an indexing method that can provide both performance benefits and storage savings. Bitmap indexes are particularly useful for data warehousing environments because data is usually updated less frequently and ad hoc queries are more common.

Bit-mapped indexes are best for “low-cardinality” data (such as a column to indicate a person’s gender, which contains only two possible values: MALE and FEMALE).

Characteristic of Bitmap Indexes:
01.  For columns with very few unique values (low cardinality)
02.  Tables that have no or little insert/update are good candidates (static data in warehouse)
03.  Stream of bits: each bit relates to a column value in a single row of table.
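As a sketch, a bitmap index on a low-cardinality column might be created like this (the table and column names are hypothetical):

```sql
-- One bitmap per distinct value (e.g. MALE, FEMALE);
-- each bit in a bitmap maps to one row of the table.
CREATE BITMAP INDEX emp_gender_bix
  ON emp_details (gender);
```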

What is B-Tree Index?
B-tree indexes are usually created on columns containing the most unique values. A B-tree index is an ordered set of entries in which each entry contains a search key value and a pointer to a specific row with that value. When the server finds a search key value matching a constraint, the pointer is used to fetch the row.


Difference between B-Tree & Bitmap Index

Table Function in Oracle

Table Function:

Table functions are functions that produce a collection of rows (either a nested table or a varray) that can be queried like a physical database table or assigned to a PL/SQL collection variable. You can use a table function like the name of a database table, in the FROM clause of a query, or like a column name in the SELECT list of a query.

A table function can take a collection of rows as input. An input collection parameter can be either a collection type (such as a VARRAY or a PL/SQL table) or a REF CURSOR.

What Are Table Functions?

Table functions return a collection type instance representing rows in a table. They can be queried like a table by calling the function in the FROM clause of a query, enclosed by the TABLE keyword. They can be assigned to a PL/SQL collection variable by calling the function in the SELECT list of a query.


For Example:

The collection type BookSet_t is defined as:

CREATE TYPE Book_t AS OBJECT
( name VARCHAR2(100),
  author VARCHAR2(30),
  abstract VARCHAR2(1000));
CREATE TYPE BookSet_t AS TABLE OF Book_t;

The CLOBs are stored in a table Catalogs:

CREATE TABLE Catalogs
( name VARCHAR2(30),
  cat CLOB);

Function GetBooks is defined as follows:

CREATE FUNCTION GetBooks(a CLOB) RETURN BookSet_t;

The query below returns all the catalogs and their corresponding book listings.

SELECT c.name, Book.name, Book.author, Book.abstract
  FROM Catalogs c, TABLE(GetBooks(c.cat)) Book;
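Since the body of GetBooks is not shown above, here is a minimal, self-contained table function that can be run end to end; all names are illustrative:

```sql
CREATE OR REPLACE TYPE num_tab_t AS TABLE OF NUMBER;
/

CREATE OR REPLACE FUNCTION first_n (p_n IN PLS_INTEGER)
  RETURN num_tab_t
IS
  l_tab num_tab_t := num_tab_t();
BEGIN
  FOR i IN 1 .. p_n LOOP
    l_tab.EXTEND;
    l_tab(i) := i;   -- fill the collection with 1..p_n
  END LOOP;
  RETURN l_tab;
END;
/

-- Query the function like a table
SELECT column_value FROM TABLE(first_n(3));   -- returns rows 1, 2, 3
```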





Pipelined Function:
A pipelined function is one that returns a result set as a collection, but does so iteratively, piping rows back to the caller as they are produced instead of building the whole collection first.

Example:

CREATE TYPE pet_t IS OBJECT (
   NAME                          VARCHAR2 (60),
   breed                         VARCHAR2 (100),
   dob                           DATE);
/

CREATE TYPE pet_nt IS TABLE OF pet_t;
/

CREATE OR REPLACE FUNCTION pet_family (dad_in IN pet_t, mom_in IN pet_t)
    RETURN pet_nt
    PIPELINED
IS
    l_count PLS_INTEGER := 0;   -- default litter size for unrecognized breeds
BEGIN
    PIPE ROW (dad_in);
    PIPE ROW (mom_in);
 
    IF mom_in.breed = 'RABBIT'
    THEN
       l_count := 12;
    ELSIF mom_in.breed = 'DOG'
    THEN
       l_count := 4;
    ELSIF mom_in.breed = 'KANGAROO'
    THEN
       l_count := 1;
    END IF;
 
    FOR indx IN 1 .. l_count
    LOOP
       PIPE ROW (pet_t ('BABY' || indx, mom_in.breed, SYSDATE));
    END LOOP;
 
    RETURN;
END;
/


SELECT *
  FROM TABLE (pet_family (
                    pet_t ('Bob', 'KANGAROO', SYSDATE),
                    pet_t ('Sally', 'KANGAROO', SYSDATE)
                 )
              );



Output:

NAME     BREED       DOB
-------  ----------  --------------------
Bob      KANGAROO    3/3/2010 11:24:41 AM
Sally    KANGAROO    3/3/2010 11:24:41 AM
BABY1    KANGAROO    3/3/2010 11:24:41 AM
 

Tuesday, February 23, 2010

Compile report RDF files into REP with a batch file.

REM WINDOWS COMPILE REPORT
::compile_report.bat
cls
Echo compiling Report .....
rem ---------------------------
rem begin command line argument
rem ---------------------------
for %%f IN (*.RDF) do rwconverter userid=stlbas/stlbas@stlbas_11 batch=yes source=%%f stype=rdffile DTYPE=repFILE compile_all=yes OVERWRITE=yes logfile=log.txt dest='D:\Report\REP\%%f'
rem -------------------------
rem end command line argument
rem -------------------------
ECHO FINISHED COMPILING