Channel: Donghua's Blog - DBAGlobe

How to fix “The database principal owns a schema in the database, and cannot be dropped.”


PS C:\Users\Administrator> sqlcmd -S  .
1> use DB1
2> go
Changed database context to 'DB1'.

1> drop user U1
2> go
Msg 15138, Level 16, State 1, Server WIN-922S55M9QDP, Line 1
The database principal owns a schema in the database, and cannot be dropped.

1> select  name from sys.schemas where principal_id=DATABASE_PRINCIPAL_ID('U1')
2> go
name
----------------------------------------------------
db_ddladmin
db_datareader
db_datawriter

(3 rows affected)


1> alter authorization on schema::db_ddladmin to dbo
2> go

1> alter authorization on schema::db_datareader to dbo
2> go

1> alter authorization on schema::db_datawriter to dbo
2> go

1> select  name from sys.schemas where principal_id=DATABASE_PRINCIPAL_ID('U1')
2> go
name
----------------------------------------------------

(0 rows affected)
1> drop user u1
2> go
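If a user owns many schemas, the ALTER AUTHORIZATION statements can be generated rather than typed one by one. A small Python sketch (the schema names are the ones returned by the sys.schemas query above):

```python
# Sketch: generate the ALTER AUTHORIZATION statements for every schema
# owned by the user being dropped. Schema names come from the sys.schemas
# query shown above; "dbo" is the new owner used in this post.
owned_schemas = ["db_ddladmin", "db_datareader", "db_datawriter"]

statements = [f"alter authorization on schema::{s} to dbo" for s in owned_schemas]
for stmt in statements:
    print(stmt)
```

The generated statements can then be pasted back into the same sqlcmd session before the DROP USER.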


Using SQL Developer to connect to a database via SSH tunnelling


1. Below is the database server (SID: orcl1). I have a normal Unix account “donghua” to access the server via SSH.


2. Create the connection profile. The hostname is “localhost” because the connection is tunnelled through SSH to the server, rather than made directly to the remote host.


3. Click “Advanced” in the connection dialog and enter the SSH details. An SSH private key can be used to automate the login (not used in my testing).


4. When connecting to the database, it will prompt for the SSH password. (Since I already saved the database password, it will not ask for the DB password here.)


5. Connected to the database. You can now work with the GUI rather than SQL*Plus.
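For reference, the tunnel SQL Developer builds is equivalent to a plain ssh port forward. A hedged sketch: only the OS user “donghua” and SID “orcl1” come from the post; the server hostname and ports below are placeholders.

```python
# Sketch: command-line equivalent of SQL Developer's SSH settings.
# "donghua" and SID "orcl1" are from the post; host name and ports are placeholders.
ssh_user, db_host = "donghua", "dbserver.example.com"
local_port, listener_port = 1521, 1521

# Forward local port 1521 to the listener on the database server:
ssh_cmd = f"ssh -L {local_port}:localhost:{listener_port} {ssh_user}@{db_host}"

# SQL Developer (or any JDBC client) then connects to localhost, not the remote host:
jdbc_url = f"jdbc:oracle:thin:@localhost:{local_port}:orcl1"

print(ssh_cmd)
print(jdbc_url)
```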


Labeling disks on Oracle Solaris for ASM diskgroup candidate disks (using format)

$
0
0

format> disk


AVAILABLE DISK SELECTIONS:
       0. c1t0d0 <ATA-VBOX HARDDISK-1.0-62.76GB>
          /pci@0,0/pci8086,2829@d/disk@0,0
       1. c1t2d0 <ATA-VBOX HARDDISK-1.0-30.00GB>
          /pci@0,0/pci8086,2829@d/disk@2,0
       2. c2t0d0 <DEFAULT cyl 2085 alt 2 hd 255 sec 63>
          /pci@0,0/pci1000,8000@14/sd@0,0
       3. c2t1d0 <VBOX-HARDDISK-1.0 cyl 2086 alt 2 hd 255 sec 63>
          /pci@0,0/pci1000,8000@14/sd@1,0
       4. c2t2d0 <VBOX-HARDDISK-1.0 cyl 2086 alt 2 hd 255 sec 63>
          /pci@0,0/pci1000,8000@14/sd@2,0
       5. c2t3d0 <VBOX-HARDDISK-1.0 cyl 2086 alt 2 hd 255 sec 63>
          /pci@0,0/pci1000,8000@14/sd@3,0
       6. c2t4d0 <VBOX-HARDDISK-1.0 cyl 2086 alt 2 hd 255 sec 63>
          /pci@0,0/pci1000,8000@14/sd@4,0
Specify disk (enter its number)[2]: 3
selecting c2t1d0
[disk formatted]
No Solaris fdisk partition found.

format> partition
WARNING - This disk may be in use by an application that has
          modified the fdisk table. Ensure that this disk is
          not currently in use before proceeding to use fdisk.
format> fdisk
No fdisk table exists. The default partition for the disk is:

  a 100% "SOLARIS System" partition

Type "y" to accept the default partition,  otherwise type "n" to edit the
partition table.
y

format> partition


PARTITION MENU:
        0      - change `0' partition
        1      - change `1' partition
        2      - change `2' partition
        3      - change `3' partition
        4      - change `4' partition
        5      - change `5' partition
        6      - change `6' partition
        7      - change `7' partition
        select - select a predefined table
        modify - modify a predefined partition table
        name   - name the current table
        print  - display the current table
        label  - write partition map and label to the disk
        !<cmd> - execute <cmd>, then return
        quit

partition> print
Current partition table (default):
Total disk cylinders available: 2085 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders        Size            Blocks
  0 unassigned    wm       0               0         (0/0/0)           0
  1 unassigned    wm       0               0         (0/0/0)           0
  2     backup    wu       0 - 2084       15.97GB    (2085/0/0) 33495525
  3 unassigned    wm       0               0         (0/0/0)           0
  4 unassigned    wm       0               0         (0/0/0)           0
  5 unassigned    wm       0               0         (0/0/0)           0
  6 unassigned    wm       0               0         (0/0/0)           0
  7 unassigned    wm       0               0         (0/0/0)           0
  8       boot    wu       0 -    0        7.84MB    (1/0/0)       16065
  9 unassigned    wm       0               0         (0/0/0)           0

       
partition> 6
Part      Tag    Flag     Cylinders        Size            Blocks
  6 unassigned    wm       0               0         (0/0/0)           0

Enter partition id tag[unassigned]:
Enter partition permission flags[wm]:
Enter new starting cyl[0]: 2 <--- can be 3 as well; skip cylinder 0 so the partition does not overwrite the VTOC
Enter partition size[0b, 0c, 2e, 0.00mb, 0.00gb]: 2082c
partition> print
Current partition table (unnamed):
Total disk cylinders available: 2085 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders        Size            Blocks
  0 unassigned    wm       0               0         (0/0/0)           0
  1 unassigned    wm       0               0         (0/0/0)           0
  2     backup    wu       0 - 2084       15.97GB    (2085/0/0) 33495525
  3 unassigned    wm       0               0         (0/0/0)           0
  4 unassigned    wm       0               0         (0/0/0)           0
  5 unassigned    wm       0               0         (0/0/0)           0
  6 unassigned    wm       2 - 2083       15.95GB    (2082/0/0) 33447330
  7 unassigned    wm       0               0         (0/0/0)           0
  8       boot    wu       0 -    0        7.84MB    (1/0/0)       16065
  9 unassigned    wm       0               0         (0/0/0)           0

partition> label
Ready to label disk, continue? y

root@solaris:~# /usr/sbin/prtvtoc /dev/rdsk/c2t0d0s2
* /dev/rdsk/c2t0d0s2 partition map
*
* Dimensions:
*     512 bytes/sector
*      63 sectors/track
*     255 tracks/cylinder
*   16065 sectors/cylinder
*    2087 cylinders
*    2085 accessible cylinders
*
* Flags:
*   1: unmountable
*  10: read-only
*
* Unallocated space:
*       First     Sector    Last
*       Sector     Count    Sector
*       16065     16065     32129
*    33479460     16065  33495524
*
*                          First     Sector    Last
* Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
       2      5    01          0  33495525  33495524
       6      0    00      32130  33447330  33479459
       8      1    01          0     16065     16064
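The prtvtoc numbers follow directly from the geometry (255 tracks/cylinder × 63 sectors/track = 16065 sectors/cylinder). A quick Python sanity check of the slice-6 figures above:

```python
# Verify the prtvtoc numbers from the disk geometry shown above.
sectors_per_track, tracks_per_cyl, bytes_per_sector = 63, 255, 512
sectors_per_cyl = sectors_per_track * tracks_per_cyl      # 16065

# Slice 6 starts at cylinder 2 and spans 2082 cylinders:
first_sector = 2 * sectors_per_cyl                        # 32130
sector_count = 2082 * sectors_per_cyl                     # 33447330
last_sector = first_sector + sector_count - 1             # 33479459

size_gb = sector_count * bytes_per_sector / 1024**3       # ~15.95 GB
print(first_sector, sector_count, last_sector, round(size_gb, 2))
```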

root@solaris:~# ls -l /dev/rdsk/c2t*d0s6
lrwxrwxrwx   1 root     root          50 Mar  2 17:10 /dev/rdsk/c2t0d0s6 -> ../../devices/pci@0,0/pci1000,8000@14/sd@0,0:g,raw
lrwxrwxrwx   1 root     root          50 Mar  2 17:10 /dev/rdsk/c2t1d0s6 -> ../../devices/pci@0,0/pci1000,8000@14/sd@1,0:g,raw
lrwxrwxrwx   1 root     root          50 Mar  2 17:10 /dev/rdsk/c2t2d0s6 -> ../../devices/pci@0,0/pci1000,8000@14/sd@2,0:g,raw
lrwxrwxrwx   1 root     root          50 Mar  2 17:10 /dev/rdsk/c2t3d0s6 -> ../../devices/pci@0,0/pci1000,8000@14/sd@3,0:g,raw
lrwxrwxrwx   1 root     root          50 Mar  2 17:10 /dev/rdsk/c2t4d0s6 -> ../../devices/pci@0,0/pci1000,8000@14/sd@4,0:g,raw
root@solaris:~# chown oracle:dba /dev/rdsk/c2t*d0s6
root@solaris:~# chmod 660 /dev/rdsk/c2t*d0s6


Create user-friendly device aliases on Solaris for ASM



oracle@solaris:~$ ls -l /dev/rdsk/c2t*d0s6
lrwxrwxrwx   1 root     root          50 Mar  2 17:10 /dev/rdsk/c2t0d0s6 -> ../../devices/pci@0,0/pci1000,8000@14/sd@0,0:g,raw
lrwxrwxrwx   1 root     root          50 Mar  2 17:10 /dev/rdsk/c2t1d0s6 -> ../../devices/pci@0,0/pci1000,8000@14/sd@1,0:g,raw
lrwxrwxrwx   1 root     root          50 Mar  2 17:10 /dev/rdsk/c2t2d0s6 -> ../../devices/pci@0,0/pci1000,8000@14/sd@2,0:g,raw
lrwxrwxrwx   1 root     root          50 Mar  2 17:10 /dev/rdsk/c2t3d0s6 -> ../../devices/pci@0,0/pci1000,8000@14/sd@3,0:g,raw
lrwxrwxrwx   1 root     root          50 Mar  2 17:10 /dev/rdsk/c2t4d0s6 -> ../../devices/pci@0,0/pci1000,8000@14/sd@4,0:g,raw


oracle@solaris:~$ ls -l /devices/pci@0,0/pci1000,8000@14/*g,raw |grep oracle
crw-rw----   1 oracle   dba      208, 198 Mar  6 21:37 /devices/pci@0,0/pci1000,8000@14/sd@0,0:g,raw
crw-rw----   1 oracle   dba      208, 262 Mar  6 21:37 /devices/pci@0,0/pci1000,8000@14/sd@1,0:g,raw
crw-rw----   1 oracle   dba      208, 326 Mar  6 21:37 /devices/pci@0,0/pci1000,8000@14/sd@2,0:g,raw
crw-rw----   1 oracle   dba      208, 390 Mar  6 21:37 /devices/pci@0,0/pci1000,8000@14/sd@3,0:g,raw
crw-rw----   1 oracle   dba      208, 454 Mar  6 21:37 /devices/pci@0,0/pci1000,8000@14/sd@4,0:g,raw

root@solaris:/u01/asmdisks# mknod disk1 c 208 198
root@solaris:/u01/asmdisks# mknod disk2 c 208 262
root@solaris:/u01/asmdisks# mknod disk3 c 208 326
root@solaris:/u01/asmdisks# mknod disk4 c 208 390
root@solaris:/u01/asmdisks# mknod disk5 c 208 454
root@solaris:/u01/asmdisks# chmod 660 disk*
root@solaris:/u01/asmdisks# chown oracle:dba disk*
root@solaris:/u01/asmdisks# ls -l disk*
crw-rw----   1 oracle   dba      208, 198 Mar  6 22:55 disk1
crw-rw----   1 oracle   dba      208, 262 Mar  6 23:04 disk2
crw-rw----   1 oracle   dba      208, 326 Mar  6 23:05 disk3
crw-rw----   1 oracle   dba      208, 390 Mar  6 23:05 disk4
crw-rw----   1 oracle   dba      208, 454 Mar  6 23:05 disk5
root@solaris:/u01/asmdisks#
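The mknod arguments are simply the major/minor pair that ls -l prints for each raw device. A small Python sketch (sample lines copied from the listing above) shows how the commands can be derived mechanically:

```python
import re

# Derive mknod commands from the major/minor numbers shown by `ls -l`
# on the character devices (sample lines copied from the output above).
ls_output = """\
crw-rw----   1 oracle   dba      208, 198 Mar  6 21:37 /devices/pci@0,0/pci1000,8000@14/sd@0,0:g,raw
crw-rw----   1 oracle   dba      208, 262 Mar  6 21:37 /devices/pci@0,0/pci1000,8000@14/sd@1,0:g,raw
crw-rw----   1 oracle   dba      208, 326 Mar  6 21:37 /devices/pci@0,0/pci1000,8000@14/sd@2,0:g,raw"""

commands = []
for i, line in enumerate(ls_output.splitlines(), start=1):
    # The first "NNN, NNN" pair on the line is the device's major, minor.
    major, minor = re.search(r"(\d+),\s+(\d+)", line).groups()
    commands.append(f"mknod disk{i} c {major} {minor}")

for cmd in commands:
    print(cmd)
```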

oracle@solaris:/u01/asmdisks$  kfod status=TRUE disks=all dscvgroup=true asm_diskstring=/u01/asmdisks/
--------------------------------------------------------------------------------
Disk          Size Header    Path                                     Disk Group   User     Group
================================================================================
   1:      16331 Mb MEMBER    /u01/asmdisks/disk1                      DATA         oracle   dba
   2:      16331 Mb MEMBER    /u01/asmdisks/disk2                      DATA         oracle   dba
   3:      16331 Mb MEMBER    /u01/asmdisks/disk3                      DATA         oracle   dba
   4:      16331 Mb CANDIDATE /u01/asmdisks/disk4                      #            oracle   dba
   5:      16331 Mb CANDIDATE /u01/asmdisks/disk5                      #            oracle   dba
--------------------------------------------------------------------------------
ORACLE_SID ORACLE_HOME
================================================================================
      +ASM /u01/app/oracle/product/12.1.0/grid
oracle@solaris:/u01/asmdisks$  kfod status=TRUE disks=all dscvgroup=true
--------------------------------------------------------------------------------
Disk          Size Header    Path                                     Disk Group   User     Group
================================================================================
   1:      16331 Mb MEMBER    /dev/rdsk/c2t0d0s6                       DATA         oracle   dba
   2:      16331 Mb MEMBER    /dev/rdsk/c2t1d0s6                       DATA         oracle   dba
   3:      16331 Mb MEMBER    /dev/rdsk/c2t2d0s6                       DATA         oracle   dba
   4:      16331 Mb CANDIDATE /dev/rdsk/c2t3d0s6                       #            oracle   dba
   5:      16331 Mb CANDIDATE /dev/rdsk/c2t4d0s6                       #            oracle   dba
--------------------------------------------------------------------------------
ORACLE_SID ORACLE_HOME
================================================================================
      +ASM /u01/app/oracle/product/12.1.0/grid

Command-line reference for kfod:


_asm_a/llow_only_raw_disks              KFOD allow only raw devices [_asm_allow_only_raw_disks=(TRUE)/FALSE]
_asm_l/ibraries         ASM Libraries[_asm_libraries=lib1,lib2,...]
_asms/id                ASM Instance[_asmsid=sid]
_b/oot          Running in pre-install env (boot=TRUE/FALSE)
_f/lexinfo              Provide flexinfo      (_flexinfo=TRUE/FALSE)
_p/atch_lib             Patchlib [_patch_lib=<asmclntsh_path>]
_u/ser          OS Username
asm_/diskstring         ASM Diskstring [asm_diskstring=discoverystring, discoverystring ...]
asmc/ompatibility               Include diskgroup ASM compatibility [asmcompatibility=TRUE/(FALSE)]
cli/ent_cluster         client cluster name
clus_/version           cluster version
clust/er                KFOD cluster [cluster=TRUE/(FALSE)]
db_/unique_name         db_unique_name for ASM instance[db_unique_name=dbname]
dbc/ompatibility                Include diskgroup DB compatibility [dbcompatibility=TRUE/(FALSE)]
disk_/access            Disk access method [disk_access=DIRECT/(INDIRECT)]
disks           Disks to discover [disks=raw,asm,badsize,all]
ds/cvgroup              Include group name [dscvgroup=TRUE/(FALSE)]
f/orce          Force option to delete files (force=TRUE/FALSE)
g/roup          Disks in diskgroup [group=diskgroup]
h/ostlist               hostlist[hostlist=host1,host2,...]
metadata_a/usize                AU Size for Metadata Size Calculation
metadata_c/lients               Client Count for Metadata Size Calculation
metadata_d/isks         Disk Count for Metadata Size Calculation
metadata_n/odes         Node Count for Metadata Size Calculation
metadata_r/edundancy            Redundancy for Metadata Size Calculation
na/me           Include disk name [name=TRUE/(FALSE)]
no/hdr          KFOD header suppression [nohdr=TRUE/(FALSE)]
ol/r            Import credentials to OLR [olr=TRUE/(FALSE)]
op              KFOD options type [OP=DISKS/CANDIDATES/MISSING/GROUPS/INSTS/VERSION/PATCHES/PATCHLVL/CLIENTS/RM/RMVERS/DFLTDSTR/GPNPDSTR/METADATA/CREDCRECLUS/GETCLSTYPE/CREDEXPORT/GETASMGUID/CREDDELCLUS/CREDVERIFY/UPGRADEVERIFY/ALL]
p/file          ASM parameter file [pfile=parameterfile]
r/im_disk_access                Rim disk access method [rim_disk_access=DIRECT/(INDIRECT)]
s/tatus         Include disk header status [status=TRUE/(FALSE)]
v/erbose                KFOD verbose errors [verbose=TRUE/(FALSE)]
w/rap           wrap file for credentials
oracle@solaris:/u01/asmdisks$ kfod asm_diskstring=/u01/asmdisk/*  status=TRUE disk=all
KFOD-00101: LRM error [107] while parsing command line arguments

(The parse error comes from the mistyped option "disk=all"; the correct spelling is "disks=all". kfod responds by printing its usage text.)

T-SQL Reference (Querying Microsoft SQL Server 2012 Databases Jump Start)


--- Introducing SQL Server 2012


USE AdventureWorks2012;

SELECT SalesPersonID, YEAR(OrderDate) AS OrderYear
FROM Sales.SalesOrderHeader
WHERE CustomerID = 29974
GROUP BY SalesPersonID, YEAR(OrderDate)
HAVING COUNT(*) > 1
ORDER BY SalesPersonID, OrderYear;

select 1; -- quick way to validate the connection

SELECT unitprice, OrderQty, (unitprice * OrderQty)
FROM sales.salesorderdetail;

-- AS is the recommended way
SELECT s.unitprice, s.OrderQty, (s.unitprice * s.OrderQty) as TotalCost
FROM sales.salesorderdetail as s;


SELECT s.unitprice, s.OrderQty, (s.unitprice * s.OrderQty)  TotalCost
FROM sales.salesorderdetail s;

 
SELECT s.unitprice, s.OrderQty, TotalCost=(s.unitprice * s.OrderQty) 
FROM sales.salesorderdetail s;
 
-- Advanced SELECT Statements
 
SELECT DISTINCT StoreID
FROM Sales.Customer;

SELECT ProductID, Name, ProductSubCategoryID,
    CASE ProductSubCategoryID
        WHEN 1 THEN 'Beverages'
        ELSE 'Unknown Category'
    END
FROM Production.Product;


SELECT SOH.SalesOrderID,
             SOH.OrderDate,
             SOD.ProductID,
             SOD.UnitPrice,
             SOD.OrderQty
FROM Sales.SalesOrderHeader AS SOH
JOIN Sales.SalesOrderDetail AS SOD
ON SOH.SalesOrderID = SOD.SalesOrderID;

SELECT SOH.SalesOrderID,
             SOH.OrderDate,
             SOD.ProductID,
             SOD.UnitPrice,
             SOD.OrderQty
FROM Sales.SalesOrderHeader AS SOH,
Sales.SalesOrderDetail AS SOD
WHERE SOH.SalesOrderID = SOD.SalesOrderID;

-- Customers that did not place orders:
SELECT CUST.CustomerID, CUST.StoreID, ORD.SalesOrderID, ORD.OrderDate
FROM Sales.Customer AS CUST
LEFT OUTER JOIN Sales.SalesOrderHeader AS ORD
ON CUST.CustomerID = ORD.CustomerID
WHERE ORD.SalesOrderID IS NULL;
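The LEFT OUTER JOIN ... IS NULL pattern is an anti-join: keep the left-hand rows with no match on the right. A minimal Python model with invented sample rows:

```python
# Anti-join sketch: customers with no matching order.
# Sample data is invented; tuples are (CustomerID, StoreID) and
# (SalesOrderID, CustomerID).
customers = [(1, "store-a"), (2, "store-b"), (3, "store-c")]
orders = [(101, 1), (102, 1), (103, 3)]

customers_with_orders = {cust_id for _, cust_id in orders}
no_orders = [c for c in customers if c[0] not in customers_with_orders]
print(no_orders)  # only customer 2 placed no order
```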

-- Combine each row from the first table with each row from the second table; all possible combinations are returned (a Cartesian product)
SELECT EMP1.BusinessEntityID, EMP2.JobTitle
FROM HumanResources.Employee AS EMP1
CROSS JOIN HumanResources.Employee AS EMP2;

-- Return all employees with ID of employee’s manager when a manager exists (INNER JOIN):
SELECT  EMP.EmpID, EMP.LastName,
        EMP.JobTitle, EMP.MgrID, MGR.LastName
FROM    HR.Employees AS EMP
INNER JOIN HR.Employees AS MGR
ON EMP.MgrID = MGR.EmpID ;

-- Return all employees with ID of manager (OUTER JOIN). This will return NULL for the CEO:
SELECT  EMP.EmpID, EMP.LastName,
        EMP.JobTitle, MGR.EmpID AS MgrID
FROM    HR.Employees AS EMP
LEFT OUTER JOIN HR.Employees AS MGR
ON EMP.MgrID = MGR.EmpID;


-- Filter rows for customers to display top 20 TotalDue items
SELECT TOP (20) SalesOrderID, CustomerID, TotalDue
FROM Sales.SalesOrderHeader
ORDER BY TotalDue DESC;

-- Filter rows for customers to display top 20 TotalDue items with ties (output could be more than 20 if ties exist)
SELECT TOP (20) WITH TIES SalesOrderID, CustomerID, TotalDue
FROM Sales.SalesOrderHeader
ORDER BY TotalDue DESC;

-- Filter rows for customers to display top 1% of TotalDue items
SELECT TOP (1) PERCENT SalesOrderID, CustomerID, TotalDue
FROM Sales.SalesOrderHeader
ORDER BY TotalDue DESC;

-- pagination
SELECT * FROM Production.Product ORDER BY ProductID
OFFSET 10 ROWS
FETCH NEXT 10 ROWS ONLY
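OFFSET/FETCH is effectively a slice of the ordered result set. A Python sketch with made-up ProductIDs:

```python
# Pagination sketch: OFFSET 10 ROWS FETCH NEXT 10 ROWS ONLY on an
# ordered result set behaves like list slicing.
rows = list(range(1, 31))        # stand-in ProductIDs 1..30, already ordered
offset, fetch = 10, 10

page = rows[offset:offset + fetch]
print(page)                      # ProductIDs 11..20, i.e. the second page
```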


SELECT CHOOSE (3, 'A', 'B', 'C', 'D', 'E', 'F') AS Result -- Result is 'C'

SELECT PARSE('02/12/2012' AS datetime2 USING 'en-US') AS parse_result;

SELECT format(getdate(), 'yyyy-MM-dd hh:mm:ss', 'en-US') AS format_result;

SELECT GETDATE() as "GETDATE()",
    GETUTCDATE() as "GETUTCDATE",
    CURRENT_TIMESTAMP as "CURRENT_TIMESTAMP",
    SYSDATETIME() as "SYSDATETIME()",
    SYSUTCDATETIME() as "SYSUTCDATETIME()",
    SYSDATETIMEOFFSET() as "SYSDATETIMEOFFSET()";
/*-----------
GETDATE()               GETUTCDATE              CURRENT_TIMESTAMP       SYSDATETIME()               SYSUTCDATETIME()            SYSDATETIMEOFFSET()
----------------------- ----------------------- ----------------------- --------------------------- --------------------------- ----------------------------------
2015-03-08 20:40:31.733 2015-03-08 12:40:31.733 2015-03-08 20:40:31.733 2015-03-08 20:40:31.7268304 2015-03-08 12:40:31.7268304 2015-03-08 20:40:31.7268304 +08:00
-----------
*/

SELECT DATEADD(day,1,'20120212');
-- Returns last day of month as start date, with optional offset
SELECT EOMONTH('20120212');
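EOMONTH returns the last day of the month of its argument; for February 2012 (a leap year) that is the 29th. A standard-library Python equivalent:

```python
import calendar
import datetime

# What EOMONTH('20120212') computes: the last day of that month.
d = datetime.date(2012, 2, 12)
last_day = calendar.monthrange(d.year, d.month)[1]   # 29: 2012 is a leap year
eomonth = d.replace(day=last_day)
print(eomonth)                                       # 2012-02-29
```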

SELECT DATEDIFF(day,'20120925',SYSDATETIME())

-- CAST is ANSI
SELECT CAST(SYSDATETIME() AS date) AS 'TodaysDate';
SELECT CONVERT(CHAR(8), CURRENT_TIMESTAMP,112) AS ISO_style;

SELECT SalesOrderID, YEAR(OrderDate) AS OrderYear
FROM Sales.SalesOrderHeader;

SELECT DB_NAME() AS current_database;

-- IIF returns one of two values depending on a logical test; shorthand for a two-outcome CASE expression
SELECT ProductID, ListPrice,
IIF(ListPrice > 50, 'high', 'low') AS PricePoint
FROM Production.Product;

-- Grouping and Aggregating Data


SELECT COUNT (DISTINCT SalesOrderID) AS UniqueOrders,
AVG(UnitPrice) AS Avg_UnitPrice,
MIN(OrderQty) AS Min_OrderQty,
MAX(LineTotal) AS Max_LineTotal
FROM Sales.SalesOrderDetail;

SELECT * FROM (
    SELECT ROW_NUMBER() OVER (PARTITION BY CustomerId ORDER BY OrderDate) AS RN
    ,*
    From Sales.SalesOrderHeader
) AS a
WHERE RN=1;


SELECT * FROM (
    SELECT RANK () OVER (PARTITION BY CustomerId ORDER BY OrderDate DESC) AS RN
    ,*
    From Sales.SalesOrderHeader
) AS a
WHERE RN=1;
 
 

SELECT *
    FROM Sales.SalesOrderHeader a
    WHERE a.SalesOrderID=(select top 1 SalesOrderID from Sales.SalesOrderHeader b where a.CustomerID=b.CustomerID order by OrderDate)
order by a.CustomerID;

WITH a AS (
    SELECT ROW_NUMBER() OVER (PARTITION BY CustomerId ORDER BY OrderDate) AS RN
    ,*
    From Sales.SalesOrderHeader
)
SELECT * FROM a where RN=1;
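All of the forms above keep one row per customer. The ROW_NUMBER ... RN = 1 idiom can be modeled in Python as sort-then-take-first-per-group (sample rows invented):

```python
from itertools import groupby

# ROW_NUMBER() OVER (PARTITION BY CustomerId ORDER BY OrderDate) with RN = 1
# keeps each customer's earliest order. Tuples: (CustomerID, OrderDate, SalesOrderID).
orders = [
    (1, "2011-05-01", 900), (1, "2011-07-01", 950),
    (2, "2011-06-15", 910), (2, "2011-06-20", 940),
]

orders.sort(key=lambda r: (r[0], r[1]))   # partition key, then ORDER BY key
first_per_customer = [next(grp) for _, grp in groupby(orders, key=lambda r: r[0])]
print(first_per_customer)
```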


CREATE FUNCTION Sales.fn_LineTotal (@SalesOrderID INT)
RETURNS TABLE
AS
RETURN
    SELECT SalesOrderID,
    CAST((OrderQty * UnitPrice * (1 - UnitPriceDiscount))
    AS DECIMAL(8, 2)) AS LineTotal
    FROM    Sales.SalesOrderDetail
    WHERE   SalesOrderID = @SalesOrderID ;


DECLARE @OrderId INT=43826;
SELECT * FROM Sales.fn_LineTotal(@OrderId);
GO

-- SET Operators, Window Functions, and Grouping

-- APPLY is a table operator used in the FROM clause and can be either a CROSS APPLY or OUTER APPLY
-- Operates on two input tables, left and right
-- Right table is often a derived table or a table-valued function
SELECT c.CustomerID
    ,c.AccountNumber
    ,o.*
From Sales.Customer AS c
OUTER APPLY (
    SELECT TOP 5 soh.OrderDate, Soh.SalesOrderID  FROM  sales.SalesOrderHeader AS soh
    WHERE soh.CustomerID=c.CustomerID
    ORDER BY soh.OrderDate DESC
) AS o
WHERE c.TerritoryID=3;


/*
    RANK Returns the rank of each row within the partition of a result set. May include ties and gaps.
    DENSE_RANK    Returns the rank of each row within the partition of a result set. May include ties but will not include gaps.
    ROW_NUMBER Returns a unique sequential row number within partition based on current order.
    NTILE    Distributes the rows in an ordered partition into a specified number of groups. Returns the number of the group to which the current row belongs.
    LAG    Returns an expression from a previous row that is a defined offset from the current row. Returns NULL if no row at specified position.
    LEAD    Returns an expression from a later row that is a defined offset from the current row. Returns NULL if no row at specified position.
    FIRST_VALUE    Returns the first value in the current window frame. Requires window ordering to be meaningful.
    LAST_VALUE Returns the last value in the current window frame. Requires window ordering to be meaningful.
*/

SELECT *
        ,Amount-LAG(Amount,1,0) OVER (PARTITION BY AccountID ORDER BY TransactionDate, TransactionID) AS DIFF
        ,SUM(Amount) OVER (PARTITION BY AccountID) AS FinalBalance
        ,SUM(Amount) OVER(
            PARTITION BY AccountID
            ORDER BY TransactionDate, TransactionID
            ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW -- note: with ORDER BY, the default frame is RANGE, not ROWS
        ) AS CurrentBalance
FROM Transactions
WHERE AccountID=25
ORDER BY AccountID,TransactionDate,TransactionID;
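The LAG and framed SUM() OVER in that query compute a per-row delta and a running balance. A Python model over invented transactions:

```python
# Running-balance sketch for one account.
# Tuples: (TransactionID, Amount); sample data invented.
txns = [(1, 100.0), (2, -30.0), (3, 50.0)]

rows = []
prev, balance = 0.0, 0.0          # LAG default 0; balance starts empty
for txn_id, amount in txns:
    balance += amount             # SUM ... ROWS UNBOUNDED PRECEDING TO CURRENT ROW
    rows.append((txn_id, amount, amount - prev, balance))  # diff = Amount - LAG(Amount,1,0)
    prev = amount
print(rows)
```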


/*
    Pivoting includes three phases:
    Grouping determines which element gets a row in the result set
    Spreading provides the distinct values to be pivoted across
    Aggregation performs an aggregation function (such as SUM)
*/

SELECT Category, [2006],[2007],[2008]
FROM ( SELECT Category, Qty, Orderyear
     FROM Sales.CategoryQtyYear) AS D
PIVOT(SUM(QTY) FOR orderyear
        IN([2006],[2007],[2008])
        ) AS pvt;
       
SELECT VendorID, [250] AS Emp1, [251] AS Emp2, [256] AS Emp3, [257] AS Emp4, [260] AS Emp5
FROM
(SELECT PurchaseOrderID, EmployeeID, VendorID
FROM Purchasing.PurchaseOrderHeader) p
PIVOT
(
COUNT (PurchaseOrderID)
FOR EmployeeID IN
( [250], [251], [256], [257], [260] )
) AS pvt
ORDER BY pvt.VendorID;       

-- Using UNPIVOT to normalize the table
CREATE TABLE pvt (VendorID int, Emp1 int, Emp2 int,
    Emp3 int, Emp4 int, Emp5 int);
GO
INSERT INTO pvt VALUES (1,4,3,5,4,4);
INSERT INTO pvt VALUES (2,4,1,5,5,5);
INSERT INTO pvt VALUES (3,4,3,5,4,4);
GO

SELECT VendorID, Employee, Orders
FROM
   (SELECT VendorID, Emp1, Emp2, Emp3, Emp4, Emp5
   FROM pvt) p
UNPIVOT
   (Orders FOR Employee IN
      (Emp1, Emp2, Emp3, Emp4, Emp5)
)AS unpvt;
GO       
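Conceptually, PIVOT = group + spread + aggregate, and UNPIVOT reverses it. A Python sketch with invented (VendorID, EmployeeID) rows:

```python
# PIVOT sketch: COUNT(PurchaseOrderID) per vendor, spread across employees.
purchase_orders = [(1, 250), (1, 250), (1, 251), (2, 251)]   # (VendorID, EmployeeID)
employee_ids = [250, 251]

pivoted = {}
for vendor, emp in purchase_orders:
    row = pivoted.setdefault(vendor, {e: 0 for e in employee_ids})
    row[emp] += 1
print(pivoted)   # one row per vendor, one count column per employee

# UNPIVOT sketch: back to one (vendor, employee, count) row per cell.
unpivoted = [(v, e, n) for v, row in pivoted.items() for e, n in row.items()]
print(unpivoted)
```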


SELECT TerritoryID, CustomerID, SUM(TotalDue) AS TotalAmountDue
FROM Sales.SalesOrderHeader
GROUP BY
GROUPING SETS((TerritoryID),(CustomerID),());

--CUBE provides shortcut for defining grouping sets given a list of columns
-- All possible combinations of grouping sets are created
SELECT TerritoryID, CustomerID, SUM(TotalDue) AS TotalAmountDue
FROM Sales.SalesOrderHeader
GROUP BY CUBE(TerritoryID, CustomerID)
ORDER BY TerritoryID, CustomerID;

-- ROLLUP provides shortcut for defining grouping sets, creates combinations assuming input columns form a hierarchy
SELECT TerritoryID, CustomerID, SUM(TotalDue) AS TotalAmountDue
FROM Sales.SalesOrderHeader
GROUP BY ROLLUP(TerritoryID, CustomerID)
ORDER BY TerritoryID, CustomerID;
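ROLLUP(TerritoryID, CustomerID) emits subtotals along the hierarchy: (territory, customer), (territory), and the grand total. A Python model, with None standing in for the NULL in rolled-up columns (data invented):

```python
# ROLLUP sketch: accumulate each row into its grouping-set keys.
sales = [  # (TerritoryID, CustomerID, TotalDue); sample data invented
    (1, 10, 100.0), (1, 10, 50.0), (1, 11, 25.0), (2, 12, 75.0),
]

totals = {}
for terr, cust, due in sales:
    # ROLLUP's grouping sets: detail, territory subtotal, grand total.
    for key in [(terr, cust), (terr, None), (None, None)]:
        totals[key] = totals.get(key, 0) + due
print(totals[(1, 10)], totals[(1, None)], totals[(None, None)])
```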

-- 06 | Modifying Data in SQL Server

INSERT INTO Production.UnitMeasure (Name, UnitMeasureCode, ModifiedDate)
VALUES (N'Square Yards', N'Y2', GETDATE());
GO

INSERT INTO Production.UnitMeasure (Name, UnitMeasureCode, ModifiedDate)
VALUES
    (N'Square Feet', N'F2', GETDATE()),
    (N'Square Inches', N'I2', GETDATE());

INSERT INTO Production.UnitMeasure (Name, UnitMeasureCode, ModifiedDate, Country)
VALUES (N'Square Miles', N'M2', GETDATE(), DEFAULT);

-- INSERT...SELECT is used to insert the result set of a query into an existing table
INSERT INTO Production.UnitMeasure (Name, UnitMeasureCode, ModifiedDate)
SELECT Name, UnitMeasureCode, ModifiedDate
FROM Sales.TempUnitTable
WHERE ModifiedDate < '20080101';

-- INSERT...EXEC is used to insert the result of a stored procedure or dynamic SQL expression into an existing table
INSERT INTO Production.UnitMeasure (Name, UnitMeasureCode, ModifiedDate)
EXEC Production.Temp_UOM
    @numrows = 5, @catid=1;

-- SELECT...INTO is similar to INSERT...SELECT but SELECT...INTO creates a new table each time the statement is executed
-- Copies column names, data types, and nullability; does not copy constraints or indexes
SELECT Name, UnitMeasureCode, ModifiedDate
INTO Production.TempUOMTable
FROM Production.UnitMeasure
WHERE ModifiedDate < '20080101';

-- IDENTITY property with a starting number of 100 and incremented by 10 as each row is added
CREATE TABLE Production.IdentityProducts(
productid int IDENTITY(100,10) NOT NULL,
productname nvarchar(40) NOT NULL,
categoryid int NOT NULL,
unitprice money NOT NULL)  
   

-- Define a sequence
CREATE SEQUENCE dbo.InvoiceSeq AS INT START WITH 5 INCREMENT BY 5;
-- Retrieve next available value from sequence
SELECT NEXT VALUE FOR dbo.InvoiceSeq;
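The sequence hands out 5, 10, 15, ... as NEXT VALUE FOR is called. A Python analogue:

```python
from itertools import count

# A sequence defined as START WITH 5 INCREMENT BY 5 behaves like a counter;
# each NEXT VALUE FOR call consumes the next number.
invoice_seq = count(start=5, step=5)

first = next(invoice_seq)    # 5
second = next(invoice_seq)   # 10
print(first, second)
```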

UPDATE Production.UnitMeasure
   SET ModifiedDate = (GETDATE())
   WHERE UnitMeasureCode = 'M2';
  

MERGE INTO schema_name.table_name AS TargetTbl
    USING (SELECT <select_list>) AS SourceTbl
    ON (TargetTbl.col1 = SourceTbl.col1)
    WHEN MATCHED THEN
        UPDATE SET col2 = SourceTbl.col2
WHEN NOT MATCHED THEN
    INSERT (<column_list>)
    VALUES (<value_list>);
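MERGE's two branches amount to an upsert: update on match, insert otherwise. A Python sketch keyed on col1 (sample values invented):

```python
# Upsert sketch: MERGE WHEN MATCHED THEN UPDATE / WHEN NOT MATCHED THEN INSERT.
target = {1: "old-a", 2: "old-b"}   # rows keyed by col1
source = {2: "new-b", 3: "new-c"}

for key, value in source.items():
    # matched rows are updated; unmatched rows are inserted
    target[key] = value
print(target)   # {1: 'old-a', 2: 'new-b', 3: 'new-c'}
```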

ALTER TABLE Production.TransactionHistoryArchive
ADD CONSTRAINT PK_TransactionHistoryArchive_TransactionID
PRIMARY KEY CLUSTERED (TransactionID);

ALTER TABLE Sales.SalesOrderHeaderSalesReason
ADD CONSTRAINT FK_SalesReason
FOREIGN KEY (SalesReasonID)
REFERENCES Sales.SalesReason (SalesReasonID)
ON DELETE CASCADE
ON UPDATE CASCADE ;

CREATE TABLE Production.TransactionHistoryArchive4
(TransactionID int NOT NULL,
CONSTRAINT AK_TransactionID UNIQUE(TransactionID) );


ALTER TABLE DBO.NewTable
ADD ZipCode int NULL
CONSTRAINT CHK_ZipCode
CHECK (ZipCode LIKE '[0-9][0-9][0-9][0-9][0-9]');

ALTER TABLE Sales.CountryRegionCurrency
ADD CONSTRAINT Default_Country
DEFAULT 'USA' FOR CountryRegionCode;

CREATE TRIGGER reminder1 ON Sales.Customer
AFTER INSERT, UPDATE
AS RAISERROR ('Notify Customer Relations', 16, 10);

DELETE Sales.ShoppingCartItem OUTPUT DELETED.* WHERE ShoppingCartID = 20621;
--Verify the rows in the table matching the WHERE clause have been deleted.
SELECT COUNT(*) AS [Rows in Table]
FROM Sales.ShoppingCartItem
WHERE ShoppingCartID = 20621;

--BEGIN TRAN
    DECLARE @tmp TABLE (ProductID INT PRIMARY KEY);
   
    UPDATE Production.Product SET Name=UPPER(Name)
    OUTPUT INSERTED.ProductID INTO @tmp
    WHERE ListPrice >10;
   
    SELECT * from @tmp -- @tmp can be referenced to retrieve original modified rows

-- ROLLBACK TRAN;

-- 07 | Programming with T-SQL

--Declare, initialize, and use a variable
DECLARE @SalesPerson_id INT = 5;
SELECT OrderYear, COUNT(DISTINCT CustomerID) AS CustCount
FROM (
SELECT YEAR(OrderDate) AS OrderYear, CustomerID
FROM Sales.SalesOrderHeader
WHERE SalesPersonID = @SalesPerson_id
) AS DerivedYear
GROUP BY OrderYear;

-- Values can be assigned with a SET command or a SELECT statement
-- SET can only assign one variable at a time. SELECT can assign multiple variables at a time
-- When using SELECT to assign a value, make sure that exactly one row is returned by the query


--Declare and initialize variables
DECLARE @numrows INT = 3, @catid INT = 2;

--Use variables to pass parameters to procedure
EXEC Production.ProdsByCategory
    @numrows = @numrows, @catid = @catid;
GO


-- Create a synonym for the Product table in AdventureWorks
CREATE SYNONYM dbo.MyProduct
FOR AdventureWorks.Production.Product;
GO
-- Query the Product table by using the synonym.
SELECT ProductID, Name
FROM MyProduct
WHERE ProductID < 5;
GO

IF OBJECT_ID ('Production.Product', 'U') IS NOT NULL
    PRINT 'I am here and contain data, so don''t delete me';
   
IF OBJECT_ID ('Production.Product', 'U') IS NOT NULL
    PRINT 'I am here and contain data, so don''t delete me'
ELSE
  PRINT 'Table not found, so feel free to create one'
GO

DECLARE @BusinessEntID AS INT = 1, @Title AS NVARCHAR(50);
WHILE @BusinessEntID <=10
   BEGIN
    SELECT @Title = JobTitle FROM HumanResources.Employee
        WHERE BusinessEntityID = @BusinessEntID;
    PRINT @Title;
    SET @BusinessEntID += 1;
   END;
GO
   
BEGIN TRY
    -- Generate a divide-by-zero error.
SELECT 1/0;
END TRY
BEGIN CATCH
SELECT
         ERROR_NUMBER() AS ErrorNumber
        ,ERROR_SEVERITY() AS ErrorSeverity
        ,ERROR_STATE() AS ErrorState
        ,ERROR_PROCEDURE() AS ErrorProcedure
        ,ERROR_LINE() AS ErrorLine
        ,ERROR_MESSAGE() AS ErrorMessage;
END CATCH;
GO

BEGIN TRY
    -- Table does not exist; object name resolution error not caught.
SELECT * FROM IDontExist;
END TRY
BEGIN CATCH
    SELECT
         ERROR_NUMBER() AS ErrorNumber
        ,ERROR_MESSAGE() AS ErrorMessage;
END CATCH
GO

BEGIN TRY
    SELECT 100/0 AS 'Problem';
END TRY
BEGIN CATCH
    PRINT 'Code inside CATCH is beginning'
    PRINT 'MyError: ' + CAST(ERROR_NUMBER()
        AS VARCHAR(255));
    THROW;
END CATCH

-- Transactions extend batches
BEGIN TRY
BEGIN TRANSACTION
  INSERT INTO Sales.SalesOrderHeader... --Succeeds
  INSERT INTO Sales.SalesOrderDetail... --Fails
COMMIT TRANSACTION -- If no errors, transaction completes
END TRY
BEGIN CATCH
--Inserted rows still exist in Sales.SalesOrderHeader
SELECT ERROR_NUMBER() AS ErrorNumber;
ROLLBACK TRANSACTION --Any transaction work undone
END CATCH;


-- SQL Server does not automatically roll back transactions when errors occur
-- To roll back, either use ROLLBACK statements in error-handling logic or enable XACT_ABORT
-- XACT_ABORT specifies whether SQL Server automatically rolls back the current transaction when a runtime error occurs
-- When SET XACT_ABORT is ON, the entire transaction is terminated and rolled back on error, unless occurring in TRY block
-- SET XACT_ABORT OFF is the default setting
-- Change XACT_ABORT value with the SET command:
SET XACT_ABORT ON;
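A minimal sketch of the ON setting's effect, assuming a throwaway table dbo.DemoXact (the name is illustrative):

```sql
CREATE TABLE dbo.DemoXact (id INT PRIMARY KEY);
GO
SET XACT_ABORT ON;
BEGIN TRANSACTION;
    INSERT INTO dbo.DemoXact VALUES (1);  -- succeeds
    INSERT INTO dbo.DemoXact VALUES (1);  -- PK violation: with XACT_ABORT ON,
                                          -- the transaction is rolled back and
                                          -- the rest of the batch is skipped
COMMIT TRANSACTION;
GO
SELECT COUNT(*) AS RowsKept FROM dbo.DemoXact;  -- 0: nothing was committed
GO
```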

-- 08 | Retrieving SQL Server Metadata and Improving Query Performance


--Pre-filtered to exclude system objects
SELECT  name, object_id, schema_id, type, type_desc
FROM sys.tables;

--Includes system and user objects
SELECT name, object_id, schema_id, type, type_desc
FROM sys.objects;

SELECT TABLE_CATALOG, TABLE_SCHEMA,
    TABLE_NAME, TABLE_TYPE
FROM    INFORMATION_SCHEMA.TABLES;

SELECT VIEW_CATALOG, VIEW_SCHEMA, VIEW_NAME,
    TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME
FROM INFORMATION_SCHEMA.VIEW_COLUMN_USAGE;

SELECT @@VERSION AS SQL_Version;

SELECT SERVERPROPERTY('ProductVersion') AS version;

SELECT SERVERPROPERTY('Collation') AS collation;

SELECT session_id, login_time, program_name
FROM sys.dm_exec_sessions
WHERE is_user_process = 1;


SELECT referencing_schema_name, referencing_entity_name, referencing_class_desc
FROM sys.dm_sql_referencing_entities('Sales.SalesOrderHeader', 'OBJECT');
GO

--no parameters, so lists all databases
EXEC sys.sp_databases;

--single parameter of name of table
EXEC sys.sp_help N'Sales.Customer';

--multiple named parameters
EXEC sys.sp_tables
    @table_name = '%',    
    @table_owner = N'Sales';


CREATE PROCEDURE Production.ProdsByProductLine
(@numrows AS int, @ProdLine AS nchar)
AS
SELECT TOP(@numrows) ProductID,
    Name, ListPrice
FROM     Production.Product
WHERE     ProductLine = @ProdLine;

--Retrieve top 50 products with product line = M
EXEC Production.ProdsByProductLine 50, 'M';

 

Fast Mirror Resync in Real Example (new feature since Oracle 11.1)


SQL> select g.group_number,g.name dgname,d.path, d.repair_timer
  2  from v$asm_disk d,v$asm_diskgroup g
  3  where d.group_number=g.group_number;

GROUP_NUMBER DGNAME                         PATH                           REPAIR_TIMER
------------ ------------------------------ ------------------------------ ------------
           1 DATA                           /dev/rdsk/c2t0d0s6                        0
           1 DATA                           /dev/rdsk/c2t1d0s6                        0
           1 DATA                           /dev/rdsk/c2t2d0s6                        0

SQL> select name,value from v$asm_attribute where group_number=1
  2   and name in ('disk_repair_time','compatible.asm','compatible.rdbms');

NAME                                     VALUE
---------------------------------------- ----------------------------------------
disk_repair_time                         3.6h
compatible.asm                           12.1.0.0.0
compatible.rdbms                         10.1.0.0.0

SQL> alter diskgroup data set attribute 'compatible.rdbms'='11.2';

Diskgroup altered.

SQL> select g.group_number,g.name dgname,d.path, d.repair_timer
  2  from v$asm_disk d,v$asm_diskgroup g
  3  where d.group_number=g.group_number;

GROUP_NUMBER DGNAME                         PATH                           REPAIR_TIMER
------------ ------------------------------ ------------------------------ ------------
           1 DATA                           /dev/rdsk/c2t0d0s6                        0
           1 DATA                           /dev/rdsk/c2t1d0s6                        0
           1 DATA                           /dev/rdsk/c2t2d0s6                        0


2015-03-10 23:23:04.823000 +08:00
SQL> alter diskgroup data set attribute 'compatible.rdbms'='11.2'
NOTE: Advancing RDBMS compatibility to 11.2.0.0.0 for grp 1
GMON querying group 1 at 9 for pid 7, osid 1782
SUCCESS: Advanced compatible.rdbms to 11.2.0.0.0 for grp 1
SUCCESS: alter diskgroup data set attribute 'compatible.rdbms'='11.2'
2015-03-10 23:24:17.988000 +08:00
Warning: VKTM detected a time drift.


SQL> !chmod 000 /dev/rdsk/c2t2d0s6


SQL> select g.group_number,g.name dgname,d.path, d.repair_timer from v$asm_disk d,v$asm_diskgroup g where d.group_number=g.group_number;

GROUP_NUMBER DGNAME                         PATH                           REPAIR_TIMER
------------ ------------------------------ ------------------------------ ------------
           1 DATA                                                                 12960
           1 DATA                           /dev/rdsk/c2t0d0s6                        0
           1 DATA                           /dev/rdsk/c2t1d0s6                        0

SQL> select g.group_number,g.name dgname,d.name, d.repair_timer from v$asm_disk d,v$asm_diskgroup g where d.group_number=g.group_number;

GROUP_NUMBER DGNAME                         NAME                                     REPAIR_TIMER
------------ ------------------------------ ---------------------------------------- ------------
           1 DATA                           DATA_0002                                       12960
           1 DATA                           DATA_0000                                           0
           1 DATA                           DATA_0001                                           0
  


Tue Mar 10 23:35:53 2015
WARNING: Read Failed. group:1 disk:2 AU:0 offset:4096 size:4096
path:Unknown disk
         incarnation:0xf0f0a79d synchronous result:'I/O error'
         subsys:Unknown library krq:0xffff80ffbc948ed8 bufp:0x9c844000 osderr1:0x0 osderr2:0x0
         IO elapsed time: 0 usec Time waited on I/O: 0 usec
WARNING: cache failed reading from group=1(DATA) dsk=2 blk=1 count=1 from disk=2 (DATA_0002) mirror=0 kfkist=0x20 status=0x02 osderr=0x0 file=kfc.c line=12668
Tue Mar 10 23:35:53 2015
Errors in file /u01/app/oracle/diag/asm/+asm/+ASM/trace/+ASM_ora_3468.trc:
ORA-15025: could not open disk "/dev/rdsk/c2t2d0s6"
ORA-27041: unable to open file
Solaris-AMD64 Error: 13: Permission denied
Additional information: 3
ORA-15080: synchronous I/O operation failed to read block 0 of disk 2 in disk group DATA
WARNING: Read Failed. group:1 disk:2 AU:11 offset:4096 size:4096
path:Unknown disk
         incarnation:0xf0f0a79d synchronous result:'I/O error'
         subsys:Unknown library krq:0xffff80ffbc948ed8 bufp:0x9c844000 osderr1:0x0 osderr2:0x0
         IO elapsed time: 0 usec Time waited on I/O: 0 usec
WARNING: cache failed reading from group=1(DATA) dsk=2 blk=1 count=1 from disk=2 (DATA_0002) mirror=1 kfkist=0x20 status=0x02 osderr=0x0 file=kfc.c line=12668
ERROR: cache failed to read group=1(DATA) dsk=2 blk=1 from disk(s): 2(DATA_0002) 2(DATA_0002)
ORA-15080: synchronous I/O operation failed to read block 0 of disk 2 in disk group DATA
ORA-15080: synchronous I/O operation failed to read block 0 of disk 2 in disk group DATA

NOTE: cache initiating offline of disk 2 group DATA
NOTE: process _user3468_+asm (3468) initiating offline of disk 2.4042303389 (DATA_0002) with mask 0x7e in group 1 (DATA) with client assisting
NOTE: checking PST: grp = 1
Tue Mar 10 23:36:00 2015
GMON checking disk modes for group 1 at 10 for pid 27, osid 3468
Tue Mar 10 23:36:00 2015
NOTE: checking PST for grp 1 done.
NOTE: initiating PST update: grp 1 (DATA), dsk = 2/0xf0f0a79d, mask = 0x6a, op = clear
Tue Mar 10 23:36:00 2015
GMON updating disk modes for group 1 at 11 for pid 27, osid 3468
WARNING: GMON has insufficient disks to maintain consensus for group 1. Minimum required is 2: updating 2 PST copies from a total of 3.
NOTE: group DATA: updated PST location: disk 0000 (PST copy 0)
NOTE: group DATA: updated PST location: disk 0001 (PST copy 1)
Tue Mar 10 23:36:01 2015
NOTE: PST update grp = 1 completed successfully
NOTE: ospid 3468 initiating cluster wide offline of disk 2 in group 1
Tue Mar 10 23:36:01 2015
NOTE: disk 2 (DATA_0002) in group 1 (DATA) is locally offline for writes
Tue Mar 10 23:36:01 2015
SUCCESS: extent 1 of file 1 group 1 repaired by relocating to a different AU on the same disk or the disk is offline
Tue Mar 10 23:36:02 2015
NOTE: process _b000_+asm (3487) initiating offline of disk 2.4042303389 (DATA_0002) with mask 0x7e in group 1 (DATA) without client assisting
Tue Mar 10 23:36:02 2015
NOTE: sending set offline flag message (3705766140) to 1 disk(s) in group 1
Tue Mar 10 23:36:02 2015
WARNING: Disk 2 (DATA_0002) in group 1 mode 0x1 is now being offlined
Tue Mar 10 23:36:02 2015
NOTE: initiating PST update: grp 1 (DATA), dsk = 2/0xf0f0a79d, mask = 0x6a, op = clear
Tue Mar 10 23:36:02 2015
GMON updating disk modes for group 1 at 12 for pid 28, osid 3487
Tue Mar 10 23:36:02 2015
NOTE: PST update grp = 1 completed successfully
NOTE: initiating PST update: grp 1 (DATA), dsk = 2/0xf0f0a79d, mask = 0x7e, op = clear
Tue Mar 10 23:36:02 2015
GMON updating disk modes for group 1 at 13 for pid 28, osid 3487
NOTE: group DATA: updated PST location: disk 0000 (PST copy 0)
NOTE: group DATA: updated PST location: disk 0001 (PST copy 1)
Tue Mar 10 23:36:03 2015
NOTE: cache closing disk 2 of grp 1: DATA_0002
Tue Mar 10 23:36:03 2015
NOTE: PST update grp = 1 completed successfully
Tue Mar 10 23:36:03 2015
NOTE: cache closing disk 2 of grp 1: (not open) DATA_0002
Tue Mar 10 23:36:42 2015
WARNING: Started Drop Disk Timeout for Disk 2 (DATA_0002) in group 1 with a value 12960
Tue Mar 10 23:37:51 2015
NOTE: cache closing disk 2 of grp 1: (not open) DATA_0002
Tue Mar 10 23:37:51 2015
NOTE: cache closing disk 2 of grp 1: (not open) DATA_0002

SQL> !chmod 660 /dev/rdsk/c2t2d0s6

SQL> alter diskgroup data online disk 'DATA_0002';

Diskgroup altered.

SQL> select g.group_number,g.name dgname,d.name, d.repair_timer from v$asm_disk d,v$asm_diskgroup g where d.group_number=g.group_number;

GROUP_NUMBER DGNAME                         NAME                           REPAIR_TIMER
------------ ------------------------------ ------------------------------ ------------
           1 DATA                           DATA_0000                                 0
           1 DATA                           DATA_0001                                 0
           1 DATA                           DATA_0002                             12408

SQL> col path for a40
SQL> select g.group_number,g.name dgname,d.path, d.repair_timer from v$asm_disk d,v$asm_diskgroup g where d.group_number=g.group_number;

GROUP_NUMBER DGNAME                         PATH                                     REPAIR_TIMER
------------ ------------------------------ ---------------------------------------- ------------
           1 DATA                           /dev/rdsk/c2t0d0s6                                  0
           1 DATA                           /dev/rdsk/c2t1d0s6                                  0
           1 DATA                           /dev/rdsk/c2t2d0s6                                  0

SQL> select g.group_number,g.name dgname,d.name, d.repair_timer from v$asm_disk d,v$asm_diskgroup g where d.group_number=g.group_number;

GROUP_NUMBER DGNAME                         NAME                           REPAIR_TIMER
------------ ------------------------------ ------------------------------ ------------
           1 DATA                           DATA_0000                                 0
           1 DATA                           DATA_0001                                 0
           1 DATA                           DATA_0002                                 0


SQL> alter diskgroup data online disk 'DATA_0002'
Tue Mar 10 23:48:54 2015
NOTE: cache closing disk 2 of grp 1: (not open) DATA_0002
Tue Mar 10 23:48:54 2015
NOTE: client +ASM:+ASM:ASM dismounting group 1 (DATA)
Tue Mar 10 23:48:54 2015
NOTE: initiating resync of disk group 1 disks
DATA_0002 (2)

NOTE: process _user3694_+asm (3694) initiating offline of disk 2.4042303389 (DATA_0002) with mask 0x7e in group 1 (DATA) without client assisting
Tue Mar 10 23:48:54 2015
NOTE: sending set offline flag message (2531577311) to 1 disk(s) in group 1
Tue Mar 10 23:48:54 2015
WARNING: Disk 2 (DATA_0002) in group 1 mode 0x1 is now being offlined
Tue Mar 10 23:48:54 2015
NOTE: initiating PST update: grp 1 (DATA), dsk = 2/0xf0f0a79d, mask = 0x6a, op = clear
Tue Mar 10 23:48:54 2015
GMON updating disk modes for group 1 at 21 for pid 7, osid 3694
Tue Mar 10 23:48:54 2015
NOTE: cache closing disk 2 of grp 1: (not open) DATA_0002
Tue Mar 10 23:48:54 2015
NOTE: PST update grp = 1 completed successfully
NOTE: initiating PST update: grp 1 (DATA), dsk = 2/0xf0f0a79d, mask = 0x7e, op = clear
Tue Mar 10 23:48:54 2015
GMON updating disk modes for group 1 at 22 for pid 7, osid 3694
Tue Mar 10 23:48:54 2015
NOTE: cache closing disk 2 of grp 1: (not open) DATA_0002
Tue Mar 10 23:48:54 2015
NOTE: PST update grp = 1 completed successfully
NOTE: requesting all-instance membership refresh for group=1
NOTE: initiating PST update: grp 1 (DATA), dsk = 2/0x0, mask = 0x11, op = assign
Tue Mar 10 23:48:54 2015
GMON updating disk modes for group 1 at 23 for pid 7, osid 3694
Tue Mar 10 23:48:54 2015
NOTE: cache closing disk 2 of grp 1: (not open) DATA_0002
NOTE: group DATA: updated PST location: disk 0000 (PST copy 0)
NOTE: group DATA: updated PST location: disk 0001 (PST copy 1)
Tue Mar 10 23:48:54 2015
NOTE: PST update grp = 1 completed successfully
NOTE: requesting all-instance disk validation for group=1
Tue Mar 10 23:48:54 2015
NOTE: disk validation pending for 1 disk in group 1/0xb6705766 (DATA)
NOTE: Found /dev/rdsk/c2t2d0s6 for disk DATA_0002
NOTE: completed disk validation for 1/0xb6705766 (DATA)
Tue Mar 10 23:48:56 2015
NOTE: discarding redo for group 1 disk 2
NOTE: initiating PST update: grp 1 (DATA), dsk = 2/0x0, mask = 0x19, op = assign
Tue Mar 10 23:48:56 2015
GMON updating disk modes for group 1 at 24 for pid 7, osid 3694
NOTE: group DATA: updated PST location: disk 0000 (PST copy 0)
NOTE: group DATA: updated PST location: disk 0001 (PST copy 1)
NOTE: group DATA: updated PST location: disk 0002 (PST copy 2)
Tue Mar 10 23:48:56 2015
NOTE: PST update grp = 1 completed successfully
Tue Mar 10 23:48:56 2015
NOTE: membership refresh pending for group 1/0xb6705766 (DATA)
Tue Mar 10 23:48:56 2015
GMON querying group 1 at 25 for pid 16, osid 1718
NOTE: cache opening disk 2 of grp 1: DATA_0002 path:/dev/rdsk/c2t2d0s6
Tue Mar 10 23:48:56 2015
SUCCESS: refreshed membership for 1/0xb6705766 (DATA)
Tue Mar 10 23:48:56 2015
NOTE: initiating PST update: grp 1 (DATA), dsk = 2/0x0, mask = 0x5d, op = assign
Tue Mar 10 23:48:56 2015
GMON updating disk modes for group 1 at 26 for pid 7, osid 3694
Tue Mar 10 23:48:56 2015
NOTE: PST update grp = 1 completed successfully
NOTE: initiating PST update: grp 1 (DATA), dsk = 2/0x0, mask = 0x7d, op = assign
Tue Mar 10 23:48:57 2015
GMON updating disk modes for group 1 at 27 for pid 7, osid 3694
Tue Mar 10 23:48:57 2015
NOTE: PST update grp = 1 completed successfully
Tue Mar 10 23:48:57 2015
SUCCESS: alter diskgroup data online disk 'DATA_0002'
Tue Mar 10 23:49:00 2015
NOTE: Attempting voting file refresh on diskgroup DATA
Tue Mar 10 23:49:00 2015
NOTE: starting rebalance of group 1/0xb6705766 (DATA) at power 1
Starting background process ARB0
Tue Mar 10 23:49:00 2015
ARB0 started with pid=24, OS id=3699
NOTE: assigning ARB0 to group 1/0xb6705766 (DATA) with 1 parallel I/O
Tue Mar 10 23:49:01 2015
NOTE: header on disk 2 advanced to format #2 using fcn 0.0
NOTE: F1X0 on disk 2 (fmt 2) relocated at fcn 0.3058: AU 0 -> AU 10
NOTE: header on disk 0 advanced to format #2 using fcn 0.730
NOTE: F1B1 fcn on disk 0 synced at fcn 0.3058
NOTE: header on disk 1 advanced to format #2 using fcn 0.730
NOTE: F1B1 fcn on disk 1 synced at fcn 0.3058
Tue Mar 10 23:49:53 2015
NOTE: initiating PST update: grp 1 (DATA), dsk = 2/0x0, mask = 0x7f, op = assign
Tue Mar 10 23:49:53 2015
GMON updating disk modes for group 1 at 28 for pid 24, osid 3699
Tue Mar 10 23:49:53 2015
NOTE: PST update grp = 1 completed successfully
NOTE: reset timers for disk: 2
NOTE: completed online of disk group 1 disks
DATA_0002 (2)

Tue Mar 10 23:49:53 2015
NOTE: stopping process ARB0
Tue Mar 10 23:49:53 2015
NOTE: requesting all-instance membership refresh for group=1
Tue Mar 10 23:49:53 2015
SUCCESS: rebalance completed for group 1/0xb6705766 (DATA)
NOTE: membership refresh pending for group 1/0xb6705766 (DATA)
Tue Mar 10 23:49:53 2015
GMON querying group 1 at 29 for pid 16, osid 1718
Tue Mar 10 23:49:53 2015
SUCCESS: refreshed membership for 1/0xb6705766 (DATA)
NOTE: Attempting voting file refresh on diskgroup DATA
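The 12960-second repair timer shown above is simply the diskgroup's disk_repair_time attribute (3.6h = 12960s) counting down. It can be tuned with the same ALTER DISKGROUP syntax used earlier for compatible.rdbms; for example (the value is illustrative):

```sql
-- Lengthen the window during which an offlined disk can be brought
-- back with a fast resync instead of a full drop and rebalance
ALTER DISKGROUP data SET ATTRIBUTE 'disk_repair_time' = '8h';
```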

Schedule an RMAN script using Oracle Scheduler on Windows Platform


 

Step 1: Create the RMAN script file: (c:\rman\rman_validate.rcv in this example)

backup validate check logical database;

Step 2: Create the RMAN batch script: (c:\rman\rman_validate.bat in this example)

set ORACLE_HOME=C:\Oracle\product\11.2.0\dbhome_1
set ORACLE_SID=ORCL112
set NLS_DATE_FORMAT=YYYY-MON-DD HH24:MI:SS

%ORACLE_HOME%\bin\rman target / log c:\rman\rman_validate.log cmdfile c:\rman\rman_validate.rcv

exit 0

Step 3: Schedule it (both Version 1 and Version 2 work)
Version 1:

begin
    dbms_scheduler.drop_job (
        job_name    => 'DATABASE_VALIDATION_VIA_RMAN');
end;
/
       

begin
    dbms_scheduler.create_job(
        job_name        => 'DATABASE_VALIDATION_VIA_RMAN',
        job_type        => 'EXECUTABLE',
        job_action        => 'c:\rman\rman_validate.bat',
        start_date        => trunc(systimestamp)+4/24,
        repeat_interval        => 'FREQ=DAILY;BYHOUR=4;BYMINUTE=0',
        enabled            => false,
        comments        => 'Database validation job via RMAN validate command');
end;
/
           

begin
    dbms_scheduler.run_job(job_name=>'DATABASE_VALIDATION_VIA_RMAN',USE_CURRENT_SESSION=>true); -- true is default
end;
/

Version 2:

begin
    dbms_scheduler.drop_job (
        job_name    => 'DATABASE_VALIDATION_VIA_RMAN');
end;
/
       

begin
    dbms_scheduler.create_job(
        job_name        => 'DATABASE_VALIDATION_VIA_RMAN',
        job_type        => 'EXECUTABLE',
        job_action        => 'C:\WINDOWS\SYSTEM32\CMD.EXE',
        number_of_arguments    =>3,
        start_date        => trunc(systimestamp)+4/24,
        repeat_interval        => 'FREQ=DAILY;BYHOUR=4;BYMINUTE=0',
        enabled            => false,
        comments        => 'Database validation job via RMAN validate command');

    dbms_scheduler.set_job_argument_value('DATABASE_VALIDATION_VIA_RMAN',1,'/q');
    dbms_scheduler.set_job_argument_value('DATABASE_VALIDATION_VIA_RMAN',2,'/c');
    dbms_scheduler.set_job_argument_value('DATABASE_VALIDATION_VIA_RMAN',3,'c:\rman\rman_validate.bat');

    dbms_scheduler.enable('DATABASE_VALIDATION_VIA_RMAN');
end;
/
           

begin
    dbms_scheduler.run_job(job_name=>'DATABASE_VALIDATION_VIA_RMAN',USE_CURRENT_SESSION=>true); -- true is default
end;
/

 

Suggestions for Windows Platforms:

 

•The OracleJobScheduler Windows Service must be started before external jobs will run (except for jobs in the SYS schema and jobs with credentials).
•The user that the OracleJobScheduler Windows Service runs as must have the "Log on as batch job" Windows privilege.
•A batch file (ending in .bat) cannot be called directly by the Scheduler. Instead, cmd.exe must be used and the name of the batch file passed in as an argument.

SQL Server T-SQL examples for Backup


USE [master]
GO
EXEC master.dbo.sp_addumpdevice 
@devtype = N'disk', @logicalname = N'BackupStore', @physicalname = N'C:\MSSQL12.PROD1\MSSQL\Backup\BackupStore.bak'
GO


BACKUP DATABASE [NORTHWND] TO  DISK = N'C:\Northwind.bak' WITH NOFORMAT, INIT, 
NAME = N'NORTHWND-Full Database Backup', SKIP, NOREWIND, NOUNLOAD,  STATS = 10
GO

BACKUP DATABASE [NORTHWND] TO  DISK = N'C:\Northwind.bak' WITH  RETAINDAYS = 2, -- EXPIREDATE = N'03/14/2015 00:00:00'
    FORMAT, INIT, 
NAME = N'NORTHWND-Full Database Backup', SKIP, NOREWIND, NOUNLOAD, COMPRESSION,  STATS = 10, CONTINUE_AFTER_ERROR
GO


BACKUP DATABASE [NORTHWND] TO [BackupStore] WITH NOFORMAT, NOINIT, 
NAME = N'NORTHWND-Full Database Backup', SKIP, NOREWIND, NOUNLOAD,  STATS = 10
GO

BACKUP DATABASE [NORTHWND] TO  [BackupStore] WITH  DIFFERENTIAL , NOFORMAT, NOINIT, 
NAME = N'NORTHWND-Full Database Backup', SKIP, NOREWIND, NOUNLOAD,  STATS = 10
GO


BACKUP LOG [NORTHWND] TO  [BackupStore] WITH NOFORMAT, NOINIT, 
NAME = N'NORTHWND-Full Database Backup', SKIP, NOREWIND, NOUNLOAD,  STATS = 10, CHECKSUM
GO

BACKUP LOG [NORTHWND] TO  [BackupStore] WITH NOFORMAT, NOINIT,  NAME = N'NORTHWND-Full Database Backup', SKIP, NOREWIND, NOUNLOAD,  STATS = 10
GO
declare @backupSetId as int
select @backupSetId = position from msdb..backupset where database_name=N'NORTHWND' and backup_set_id=(select max(backup_set_id) from msdb..backupset where database_name=N'NORTHWND' )
if @backupSetId is null begin raiserror(N'Verify failed. Backup information for database ''NORTHWND'' not found.', 16, 1) end
RESTORE VERIFYONLY FROM  [BackupStore] WITH  FILE = @backupSetId,  NOUNLOAD,  NOREWIND
GO
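Because the backups above use NOINIT, each run appends another backup set to the device. Before verifying a specific set, the device contents can be listed (a sketch against the BackupStore device created earlier):

```sql
-- List every backup set currently stored on the logical backup device
RESTORE HEADERONLY FROM [BackupStore];
GO
```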

-- Backup Tail of Translog
BACKUP LOG [NORTHWND] TO  [BackupStore] WITH  NO_TRUNCATE , NOFORMAT, NOINIT, 
NAME = N'NORTHWND-Full Database Backup', SKIP, NOREWIND, NOUNLOAD,  NORECOVERY , COMPRESSION,  STATS = 10
GO

/*
Msg 33101, Level 16, State 1, Line 51
Cannot use certificate 'DatabaseSecureBackup', because its private key is not present or it is not protected by the database master key. SQL Server requires the ability to automatically access the private key of the certificate used for this operation.
Msg 3013, Level 16, State 1, Line 51
BACKUP DATABASE is terminating abnormally.
USE [Master];
CREATE CERTIFICATE DatabaseSecureBackup
   ENCRYPTION BY PASSWORD = 'pGFD4bb925DGvbd2439587y'
   WITH SUBJECT = 'Database Secure Backup',
   EXPIRY_DATE = '20181031';
GO
*/


USE master;
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<UseStrongPasswordHere>';
go
CREATE CERTIFICATE DatabaseSecureBackup2 WITH SUBJECT = 'Database Secure Backup2';
go

BACKUP DATABASE [NORTHWND] TO  DISK = N'C:\Northwind.bak'
WITH FORMAT, INIT,  MEDIANAME = N'New Secure Media',  NAME = N'NORTHWND-Full Database Backup',
SKIP, NOREWIND, NOUNLOAD, ENCRYPTION(ALGORITHM = AES_256, SERVER CERTIFICATE = [DatabaseSecureBackup2]),  STATS = 10
GO

/*
Warning: The certificate used for encrypting the database encryption key has not been backed up. You should immediately back up the certificate and the private key associated with the certificate. If the certificate ever becomes unavailable or if you must restore or attach the database on another server, you must have backups of both the certificate and the private key or you will not be able to open the database.
11 percent processed.
20 percent processed.
30 percent processed.
40 percent processed.
51 percent processed.
61 percent processed.
70 percent processed.
80 percent processed.
90 percent processed.
Processed 576 pages for database 'NORTHWND', file 'Northwind' on file 1.
100 percent processed.
Processed 1 pages for database 'NORTHWND', file 'Northwind_log' on file 1.
BACKUP DATABASE successfully processed 577 pages in 0.845 seconds (5.327 MB/sec).
*/
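The warning above should be acted on right away: without a backup of the certificate and its private key, encrypted backups cannot be restored on another server. A sketch of one way to do it (file paths and the password are placeholders):

```sql
USE master;
GO
-- Back up the certificate and its private key to files that should
-- be stored separately from the database backups themselves
BACKUP CERTIFICATE DatabaseSecureBackup2
    TO FILE = 'C:\Certs\DatabaseSecureBackup2.cer'
    WITH PRIVATE KEY (
        FILE = 'C:\Certs\DatabaseSecureBackup2.pvk',
        ENCRYPTION BY PASSWORD = '<UseStrongPasswordHere>');
GO
```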


VMware Transport Modes: Best practices and troubleshooting

 

Encountered an issue where a restore via "SAN" transport failed: the restored VM would not boot, showing "operating system not found". Retrying the restore with "NBD" worked.

http://www.symantec.com/business/support/index?page=content&id=tech183072

 

Issue

A VMware Backup Host can access Virtual Machine data from datastores using four different methods – SAN, LAN(NBD), HotAdd, NBDSSL. These methods are referred to as VMware Transport modes. This article talks about these transport modes, the best practices around them and troubleshooting tips for some commonly seen errors related to transport modes in NetBackup and Backup Exec.

Solution

For both Backup and Restore operations, NetBackup and Backup Exec allow choosing any of the four transport modes or a combination of these. If a combination of the transport modes is given, NetBackup and Backup Exec will try all of them one by one until gaining successful access to the data of the Virtual Machine.

Details on each of the transport modes

1. SAN:  The SAN transport mode requires the VMware Backup Host to reside on a physical machine with access to Fibre Channel or iSCSI SAN containing the virtual disks to be accessed. This is an efficient data path because no data needs to be transferred through the production ESX/ESXi host.

In this mode, vStorage APIs obtain information from the vCenter server or ESX/ESXi host about the layout of VMFS LUNs and, using this information, reads data directly from the SAN or iSCSI LUN where the VMDK resides.

Best practices around SAN:

  • For using SAN, make sure that datastore LUNs are accessible to the VMware Backup Host.
  • SAN transport is usually the best choice for backups when running on a physical VMware Backup Host. However, it is disabled inside virtual machines, so use HotAdd instead on a virtual VMware Backup Host.
  • SAN transport is not always the best choice for restores. It offers the best performance on thick disks, but the worst performance on thin disks, because of the way vStorage APIs work. For thin disk restore, LAN(NBD) is faster.
  • For SAN restores, the virtual disk size should be a multiple of the underlying VMFS block size, otherwise the write to the last fraction of the disk will fail. For example, if the datastore has a 1MB block size and the virtual disk is 16.3MB, the last 0.3MB will not get written. The workaround in this case would be to use NBD for restores of such Virtual Machines.
  • When using SAN transport or hot-add mode on a Windows Server 2008/2008 R2 VMware Backup Host, make sure to set:
    • SAN policy to onlineAll
    • SAN disk as read-only, except during restores

2. LAN (NBD): In this mode, the ESX/ESXi host reads data from storage and sends it across a network to the VMware Backup Host. As its name implies, this transport mode is not LAN‐free, unlike SAN transport.

LAN transport offers the following advantages:

  • The ESX/ESXi host can use any storage device, including local storage or NAS.
  • The VMware Backup server could be a virtual machine, so you can use a resource pool and scheduling capabilities of VMware vSphere to minimize the performance impact of backup. For example, you can put the VMware Backup Host in a different resource pool than the production ESX/ESXi hosts, with lower priority for backup.
  • If the ESX/ESXi host and VMware Backup Host are on a private network, you can use unencrypted data transfer, which is faster and consumes fewer resources than NBDSSL. If you need to protect sensitive information, you have the option of transferring virtual machine data in an encrypted form using NBDSSL.

Best Practices when using LAN:

  • Since the data in this case is read by the ESX/ESXi server from storage and then sent to the VMware Backup Host, it is a must to have network connectivity between the ESX/ESXi server and the VMware Backup Host. If the VMware Backup Host has connectivity to the vCenter server but not the ESX/ESXi server, snapshots will succeed but VMDK read/write operations will fail.
  • The VMware Backup Host will need the ability to connect to TCP port 902 on ESX/ESXi hosts while using NBD/NBDSSL for backup/restores.
  • VMware uses the Network File Copy (NFC) protocol to read VMDKs in NBD transport mode. You need one NFC connection for each VMDK file being backed up. There is a limit on the number of NFC connections that can be made per ESX/vCenter server; these limits differ between vSphere versions - please refer to the NetBackup for VMware Admin Guide for the exact limits. Backup/restore operations using NBD might hang if this limit is reached.

3. HotAdd: When running VMware Backup Host on a Virtual Machine, vStorage APIs can take advantage of the SCSI Hot-add capability of the ESX/ESXi server to attach the VMDKs of a Virtual Machine being backed up to the VMware Backup Host. This is referred to as HotAdd transport mode.

Running the VMware Backup server on a virtual machine has two advantages: it is easy to move a virtual machine around and it can also back up local storage without using the LAN, although this incurs more overhead on the physical ESX/ESXi host than when using SAN transport mode.

Best practices when using HotAdd:

  • HotAdd works only with virtual machines with SCSI disks and is not supported for backing up virtual machines with IDE disks.
  • A single SCSI controller can have a maximum of 15 disks attached. To run multiple concurrent jobs totaling more than 15 disks, it is necessary to add more SCSI controllers to the HotAdd host. A maximum of 4 SCSI controllers can be added to a HotAdd host, so at most 60 devices are supported.
  • HotAdd requires the VMware Backup Host to have access to datastores where the Virtual Machine being backed up resides. This essentially means:
    • ESX where the VMware backup host is running should have access to datastores where the Virtual Machine being backed up resides. 
    • Both the VMware backup host and Virtual Machine being backed up should be under the same datacenter.
  • HotAdd cannot be used if the VMFS block size of the datastore containing the virtual machine folder for the target virtual machine does not match the VMFS block size of the datastore containing the VMware Backup Host virtual machine. For example, if you back up virtual disk on a datastore with 1MB blocks, the VMware Backup Host must also be on a datastore with 1MB blocks.
  • Restores using HotAdd on a Windows Server 2008 proxy require setting the SAN policy to onlineAll.
  • If you are converting a physical machine to a virtual machine with the intention of using HotAdd to back up the virtual machine, do not use IDE controllers for any disks that are used during the conversion process.
  • The VMware Backup Host will need the ability to connect to TCP port 902 on ESX/ESXi hosts while using HotAdd for backup/restores.

4. NBDSSL:  NBDSSL is the same as NBD, except that NBDSSL uses SSL to encrypt all data passed over the TCP/IP connection.

Troubleshooting for some common transport mode related failures

Backups/restores failing with status 6, status 13, or status 11, with the following indications in the Activity Monitor, might point to an issue with transport modes:

  • ERR - Error opening the snapshot disks using given transport mode: Status 23 indicates that there was a problem accessing the vmdk using the given transport mode.
    Here are some tips on handling this kind of error:
    • If you are using NBD, make sure the VMware Backup Host has connectivity to the ESX server hosting the virtual machine.
    • If you are using SAN, please make sure that the datastore LUNs are accessible to the VMware Backup Host.
    • If you are using HotAdd, please make sure that your backup host is a Virtual Machine and the following conditions are satisfied:
      • The VM should not contain IDE disks.
      • Ensure that there are sufficient SCSI controllers attached on the Backup Host VM.
      • The Backup Host VM has access to datastores where VM being backed up resides.
      • The Backup Host VM and VM being backed up should be under the same datacenter.
      • If the previous backup failed, it might have left some disks of the VM being backed up attached to the Backup Host. These disks need to be manually removed before attempting the next backup.
    • If a non-default port for vCenter is in use, then that port needs to be defined while adding vCenter credentials to NetBackup or Backup Exec.
    • If using NBD/NBDSSL/HotAdd, please make sure the VMware Backup Host is able to communicate with port 902 of the ESX server hosting the VM.
  • file read failed indicates that there might be a problem reading the VMDK using the given transport mode.
  • file write failed indicates that there might be a problem writing to the VMDK using the given transport mode.
    • If using SAN for restores, please make sure the datastore LUNs are accessible to the VMware Backup Host and in an online state.
    • If using HotAdd for restores, please make sure that the SAN policy on the Backup Host is set to OnlineAll.
    • If using SAN for restores, make sure that the size of the VMDK is a multiple of the datastore block size; otherwise, the write of the last block will fail. In this case, a workaround is to use NBD for the restore.
    • Please make sure that you assign the necessary privileges to the user configured in NetBackup or Backup Exec to log on to vSphere.
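Several of the tips above come down to checking TCP reachability from the VMware Backup Host (port 902 on the ESX server, the vCenter port, and so on). A minimal pre-flight sketch using only bash's built-in /dev/tcp; the host names in the comments are hypothetical placeholders:

```shell
#!/bin/bash
# check_tcp HOST PORT -- return 0 if a TCP connection succeeds within 3 seconds.
check_tcp() {
  timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# Hypothetical hosts -- substitute your own ESX/vCenter names:
# check_tcp esx01.example.com 902   && echo "ESX port 902 reachable"
# check_tcp vcenter.example.com 443 && echo "vCenter port 443 reachable"
```

Running this from the Backup Host (not from your workstation) is what matters, since it is the Backup Host that opens these connections during backup and restore.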

Installing the Microsoft ODBC Driver for SQL Server on Linux


http://www.microsoft.com/en-us/download/details.aspx?id=36437
https://msdn.microsoft.com/library/hh568451(SQL.110).aspx

[root@localhost msodbcsql-11.0.2270.0]# uname -a
Linux localhost.localdomain 3.8.13-44.1.5.el7uek.x86_64 #2 SMP Wed Nov 12 12:55:08 PST 2014 x86_64 x86_64 x86_64 GNU/Linux
[root@localhost msodbcsql-11.0.2270.0]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.0 (Maipo)
[root@localhost msodbcsql-11.0.2270.0]# cat /etc/oracle-release
Oracle Linux Server release 7.0
[root@localhost msodbcsql-11.0.2270.0]# yum install glibc libgcc libstdc++ krb5-libs openssl libuuid

INSTALLING THE DRIVER MANAGER


[root@localhost ~]# tar zxvf msodbcsql-11.0.2270.0.tar.gz
msodbcsql-11.0.2270.0/
msodbcsql-11.0.2270.0/include/
msodbcsql-11.0.2270.0/include/msodbcsql.h
msodbcsql-11.0.2270.0/bin/
msodbcsql-11.0.2270.0/bin/SQLCMD.rll
msodbcsql-11.0.2270.0/bin/BatchParserGrammar.dfa
msodbcsql-11.0.2270.0/bin/BatchParserGrammar.llr
msodbcsql-11.0.2270.0/bin/bcp.rll
msodbcsql-11.0.2270.0/bin/bcp-11.0.2270.0
msodbcsql-11.0.2270.0/bin/sqlcmd-11.0.2270.0
msodbcsql-11.0.2270.0/WARNING
msodbcsql-11.0.2270.0/build_dm.sh
msodbcsql-11.0.2270.0/lib64/
msodbcsql-11.0.2270.0/lib64/msodbcsqlr11.rll
msodbcsql-11.0.2270.0/lib64/libmsodbcsql-11.0.so.2270.0
msodbcsql-11.0.2270.0/install.sh
msodbcsql-11.0.2270.0/LICENSE
msodbcsql-11.0.2270.0/README
msodbcsql-11.0.2270.0/docs/
msodbcsql-11.0.2270.0/docs/en_US.tar.gz
[root@localhost ~]#
[root@localhost ~]# cd msodbcsql-11.0.2270.0
[root@localhost msodbcsql-11.0.2270.0]# ls
bin  build_dm.sh  docs  include  install.sh  lib64  LICENSE  README  WARNING
[root@localhost msodbcsql-11.0.2270.0]# ls -l
total 72
drwxrwxr-x. 2 root root  4096 Jan 15  2013 bin
-rwxr-xr-x. 1 root root 10001 Jan 15  2013 build_dm.sh
drwxrwxr-x. 2 root root    25 Jan 15  2013 docs
drwxrwxr-x. 2 root root    24 Jan 15  2013 include
-rwxr-xr-x. 1 root root 23323 Jan 15  2013 install.sh
drwxrwxr-x. 2 root root    63 Jan 15  2013 lib64
-rw-r--r--. 1 root root 17327 Jan 15  2013 LICENSE
-rw-r--r--. 1 root root  7103 Jan 15  2013 README
-rw-r--r--. 1 root root  1105 Jan 15  2013 WARNING
[root@localhost msodbcsql-11.0.2270.0]# ./build_dm.sh --help

Build unixODBC 2.3.0 DriverManager script
Copyright Microsoft Corp.

Usage: build_dm.sh [options]

This script downloads, configures, and builds unixODBC 2.3.0 DriverManager so that it is
ready to install for use with the Microsoft SQL Server ODBC Driver V1.0 for Linux

Valid options are --help, --download-url, --prefix, --libdir, --sysconfdir
  --help - prints this message
  --download-url=url | file:// - Specify the location (and name) of unixODBC-2.3.0.tar.gz.
       For example, if unixODBC-2.3.0.tar.gz is in the current directory, specify
       --download-url=file://unixODBC-2.3.0.tar.gz.
  --prefix - directory to install unixODBC-2.3.0.tar.gz to.
  --libdir - directory where ODBC drivers will be placed
  --sysconfdir - directory where unixODBC 2.3.0 DriverManager configuration files are placed

[root@localhost msodbcsql-11.0.2270.0]# ./build_dm.sh

Build unixODBC 2.3.0 DriverManager script
Copyright Microsoft Corp.

In order to use the Microsoft ODBC Driver 11 for SQL Server on Linux,
the unixODBC DriverManager must be installed on your computer.  unixODBC
DriverManager is a third-party tool made available by the unixODBC Project.
To assist you in the installation process, this script will attempt to
download, properly configure, and build the unixODBC DriverManager from
http://www.unixodbc.org/ for use with the Microsoft ODBC Driver 11
for SQL Server ODBC Driver on Linux.

Alternatively, you can choose to download and configure unixODBC
DriverManager from
http://www.unixodbc.org/ yourself.

Note: unixODBC DriverManager is licensed to you under the terms of an
agreement between you and the unixODBC Project, not Microsoft.  Microsoft
does not guarantee the unixODBC DriverManager or grant any rights to
you.  Prior to downloading, you should review the license for unixODBC
DriverManager at
http://www.unixodbc.org/.

The script is provided as a convenience to you as-is, without any express
or implied warranties of any kind.  Microsoft is not liable for any issues
arising out of your use of the script.

Enter 'YES' to have this script continue: YES

Verifying processor and operating system ................................... OK
Verifying wget is installed ................................................ OK
Verifying tar is installed ................................................. OK
Verifying make is installed ................................................ OK
Downloading unixODBC 2.3.0 DriverManager ................................... OK
Unpacking unixODBC 2.3.0 DriverManager ..................................... OK
Configuring unixODBC 2.3.0 DriverManager ................................... OK
Building unixODBC 2.3.0 DriverManager ...................................... OK
Build of the unixODBC 2.3.0 DriverManager complete.

Run the command 'cd /tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0; make install' to install the driver manager.

PLEASE NOTE THAT THIS WILL POTENTIALLY INSTALL THE NEW DRIVER MANAGER OVER ANY
EXISTING UNIXODBC DRIVER MANAGER.  IF YOU HAVE ANOTHER COPY OF UNIXODBC INSTALLED,
THIS MAY POTENTIALLY OVERWRITE THAT COPY.


[root@localhost msodbcsql-11.0.2270.0]# cd /tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0; make install
Making install in extras
make[1]: Entering directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/extras'
make[2]: Entering directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/extras'
make[2]: Nothing to be done for `install-exec-am'.
make[2]: Nothing to be done for `install-data-am'.
make[2]: Leaving directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/extras'
make[1]: Leaving directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/extras'
Making install in log
make[1]: Entering directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/log'
make[2]: Entering directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/log'
make[2]: Nothing to be done for `install-exec-am'.
make[2]: Nothing to be done for `install-data-am'.
make[2]: Leaving directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/log'
make[1]: Leaving directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/log'
Making install in lst
make[1]: Entering directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/lst'
make[2]: Entering directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/lst'
make[2]: Nothing to be done for `install-exec-am'.
make[2]: Nothing to be done for `install-data-am'.
make[2]: Leaving directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/lst'
make[1]: Leaving directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/lst'
Making install in ini
make[1]: Entering directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/ini'
make[2]: Entering directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/ini'
make[2]: Nothing to be done for `install-exec-am'.
make[2]: Nothing to be done for `install-data-am'.
make[2]: Leaving directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/ini'
make[1]: Leaving directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/ini'
Making install in libltdl
make[1]: Entering directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/libltdl'
make  install-am
make[2]: Entering directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/libltdl'
make[3]: Entering directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/libltdl'
test -z "/usr/lib64" || /usr/bin/mkdir -p "/usr/lib64"
test -z "/usr/include" || /usr/bin/mkdir -p "/usr/include"
test -z "" || /usr/bin/mkdir -p ""
make[3]: Leaving directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/libltdl'
make[2]: Leaving directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/libltdl'
make[1]: Leaving directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/libltdl'
Making install in odbcinst
make[1]: Entering directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/odbcinst'
make[2]: Entering directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/odbcinst'
test -z "/usr/lib64" || /usr/bin/mkdir -p "/usr/lib64"
/bin/sh ../libtool   --mode=install /usr/bin/install -c   libodbcinst.la '/usr/lib64'
libtool: install: /usr/bin/install -c .libs/libodbcinst.so.1.0.0 /usr/lib64/libodbcinst.so.1.0.0
libtool: install: (cd /usr/lib64 && { ln -s -f libodbcinst.so.1.0.0 libodbcinst.so.1 || { rm -f libodbcinst.so.1 && ln -s libodbcinst.so.1.0.0 libodbcinst.so.1; }; })
libtool: install: (cd /usr/lib64 && { ln -s -f libodbcinst.so.1.0.0 libodbcinst.so || { rm -f libodbcinst.so && ln -s libodbcinst.so.1.0.0 libodbcinst.so; }; })
libtool: install: /usr/bin/install -c .libs/libodbcinst.lai /usr/lib64/libodbcinst.la
libtool: finish: PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin:/sbin" ldconfig -n /usr/lib64
----------------------------------------------------------------------
Libraries have been installed in:
   /usr/lib64

If you ever happen to want to link against installed libraries
in a given directory, LIBDIR, you must either use libtool, and
specify the full pathname of the library, or use the `-LLIBDIR'
flag during linking and do at least one of the following:
   - add LIBDIR to the `LD_LIBRARY_PATH' environment variable
     during execution
   - add LIBDIR to the `LD_RUN_PATH' environment variable
     during linking
   - use the `-Wl,-rpath -Wl,LIBDIR' linker flag
   - have your system administrator add LIBDIR to `/etc/ld.so.conf'

See any operating system documentation about shared libraries for
more information, such as the ld(1) and ld.so(8) manual pages.
----------------------------------------------------------------------
test -z "/etc" || /usr/bin/mkdir -p "/etc"
make[2]: Nothing to be done for `install-data-am'.
make[2]: Leaving directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/odbcinst'
make[1]: Leaving directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/odbcinst'
Making install in DriverManager
make[1]: Entering directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/DriverManager'
make[2]: Entering directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/DriverManager'
test -z "/usr/lib64" || /usr/bin/mkdir -p "/usr/lib64"
/bin/sh ../libtool   --mode=install /usr/bin/install -c   libodbc.la '/usr/lib64'
libtool: install: /usr/bin/install -c .libs/libodbc.so.1.0.0 /usr/lib64/libodbc.so.1.0.0
libtool: install: (cd /usr/lib64 && { ln -s -f libodbc.so.1.0.0 libodbc.so.1 || { rm -f libodbc.so.1 && ln -s libodbc.so.1.0.0 libodbc.so.1; }; })
libtool: install: (cd /usr/lib64 && { ln -s -f libodbc.so.1.0.0 libodbc.so || { rm -f libodbc.so && ln -s libodbc.so.1.0.0 libodbc.so; }; })
libtool: install: /usr/bin/install -c .libs/libodbc.lai /usr/lib64/libodbc.la
libtool: finish: PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin:/sbin" ldconfig -n /usr/lib64
----------------------------------------------------------------------
Libraries have been installed in:
   /usr/lib64

If you ever happen to want to link against installed libraries
in a given directory, LIBDIR, you must either use libtool, and
specify the full pathname of the library, or use the `-LLIBDIR'
flag during linking and do at least one of the following:
   - add LIBDIR to the `LD_LIBRARY_PATH' environment variable
     during execution
   - add LIBDIR to the `LD_RUN_PATH' environment variable
     during linking
   - use the `-Wl,-rpath -Wl,LIBDIR' linker flag
   - have your system administrator add LIBDIR to `/etc/ld.so.conf'

See any operating system documentation about shared libraries for
more information, such as the ld(1) and ld.so(8) manual pages.
----------------------------------------------------------------------
make[2]: Nothing to be done for `install-data-am'.
make[2]: Leaving directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/DriverManager'
make[1]: Leaving directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/DriverManager'
Making install in exe
make[1]: Entering directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/exe'
make[2]: Entering directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/exe'
test -z "/usr/bin" || /usr/bin/mkdir -p "/usr/bin"
  /bin/sh ../libtool   --mode=install /usr/bin/install -c isql dltest odbcinst iusql odbc_config '/usr/bin'
libtool: install: /usr/bin/install -c .libs/isql /usr/bin/isql
libtool: install: /usr/bin/install -c dltest /usr/bin/dltest
libtool: install: /usr/bin/install -c .libs/odbcinst /usr/bin/odbcinst
libtool: install: /usr/bin/install -c .libs/iusql /usr/bin/iusql
libtool: install: /usr/bin/install -c odbc_config /usr/bin/odbc_config
make[2]: Nothing to be done for `install-data-am'.
make[2]: Leaving directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/exe'
make[1]: Leaving directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/exe'
Making install in cur
make[1]: Entering directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/cur'
make[2]: Entering directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/cur'
test -z "/usr/lib64" || /usr/bin/mkdir -p "/usr/lib64"
/bin/sh ../libtool   --mode=install /usr/bin/install -c   libodbccr.la '/usr/lib64'
libtool: install: /usr/bin/install -c .libs/libodbccr.so.1.0.0 /usr/lib64/libodbccr.so.1.0.0
libtool: install: (cd /usr/lib64 && { ln -s -f libodbccr.so.1.0.0 libodbccr.so.1 || { rm -f libodbccr.so.1 && ln -s libodbccr.so.1.0.0 libodbccr.so.1; }; })
libtool: install: (cd /usr/lib64 && { ln -s -f libodbccr.so.1.0.0 libodbccr.so || { rm -f libodbccr.so && ln -s libodbccr.so.1.0.0 libodbccr.so; }; })
libtool: install: /usr/bin/install -c .libs/libodbccr.lai /usr/lib64/libodbccr.la
libtool: finish: PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin:/sbin" ldconfig -n /usr/lib64
----------------------------------------------------------------------
Libraries have been installed in:
   /usr/lib64

If you ever happen to want to link against installed libraries
in a given directory, LIBDIR, you must either use libtool, and
specify the full pathname of the library, or use the `-LLIBDIR'
flag during linking and do at least one of the following:
   - add LIBDIR to the `LD_LIBRARY_PATH' environment variable
     during execution
   - add LIBDIR to the `LD_RUN_PATH' environment variable
     during linking
   - use the `-Wl,-rpath -Wl,LIBDIR' linker flag
   - have your system administrator add LIBDIR to `/etc/ld.so.conf'

See any operating system documentation about shared libraries for
more information, such as the ld(1) and ld.so(8) manual pages.
----------------------------------------------------------------------
make[2]: Nothing to be done for `install-data-am'.
make[2]: Leaving directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/cur'
make[1]: Leaving directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/cur'
Making install in DRVConfig
make[1]: Entering directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/DRVConfig'
make[2]: Entering directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/DRVConfig'
make[3]: Entering directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/DRVConfig'
make[3]: Nothing to be done for `install-exec-am'.
make[3]: Nothing to be done for `install-data-am'.
make[3]: Leaving directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/DRVConfig'
make[2]: Leaving directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/DRVConfig'
make[1]: Leaving directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/DRVConfig'
Making install in Drivers
make[1]: Entering directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/Drivers'
make[2]: Entering directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/Drivers'
make[3]: Entering directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/Drivers'
make[3]: Nothing to be done for `install-exec-am'.
make[3]: Nothing to be done for `install-data-am'.
make[3]: Leaving directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/Drivers'
make[2]: Leaving directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/Drivers'
make[1]: Leaving directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/Drivers'
Making install in include
make[1]: Entering directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/include'
make[2]: Entering directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/include'
make[2]: Nothing to be done for `install-exec-am'.
test -z "/usr/include" || /usr/bin/mkdir -p "/usr/include"
/usr/bin/install -c -m 644 odbcinst.h odbcinstext.h sql.h sqlext.h sqltypes.h sqlucode.h uodbc_stats.h uodbc_extras.h '/usr/include'
make[2]: Leaving directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/include'
make[1]: Leaving directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/include'
Making install in doc
make[1]: Entering directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/doc'
Making install in AdministratorManual
make[2]: Entering directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/doc/AdministratorManual'
make[3]: Entering directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/doc/AdministratorManual'
make[3]: Nothing to be done for `install-exec-am'.
make[3]: Nothing to be done for `install-data-am'.
make[3]: Leaving directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/doc/AdministratorManual'
make[2]: Leaving directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/doc/AdministratorManual'
Making install in ProgrammerManual
make[2]: Entering directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/doc/ProgrammerManual'
Making install in Tutorial
make[3]: Entering directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/doc/ProgrammerManual/Tutorial'
make[4]: Entering directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/doc/ProgrammerManual/Tutorial'
make[4]: Nothing to be done for `install-exec-am'.
make[4]: Nothing to be done for `install-data-am'.
make[4]: Leaving directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/doc/ProgrammerManual/Tutorial'
make[3]: Leaving directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/doc/ProgrammerManual/Tutorial'
make[3]: Entering directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/doc/ProgrammerManual'
make[4]: Entering directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/doc/ProgrammerManual'
make[4]: Nothing to be done for `install-exec-am'.
make[4]: Nothing to be done for `install-data-am'.
make[4]: Leaving directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/doc/ProgrammerManual'
make[3]: Leaving directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/doc/ProgrammerManual'
make[2]: Leaving directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/doc/ProgrammerManual'
Making install in UserManual
make[2]: Entering directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/doc/UserManual'
make[3]: Entering directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/doc/UserManual'
make[3]: Nothing to be done for `install-exec-am'.
make[3]: Nothing to be done for `install-data-am'.
make[3]: Leaving directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/doc/UserManual'
make[2]: Leaving directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/doc/UserManual'
Making install in lst
make[2]: Entering directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/doc/lst'
make[3]: Entering directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/doc/lst'
make[3]: Nothing to be done for `install-exec-am'.
make[3]: Nothing to be done for `install-data-am'.
make[3]: Leaving directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/doc/lst'
make[2]: Leaving directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/doc/lst'
make[2]: Entering directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/doc'
make[3]: Entering directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/doc'
make[3]: Nothing to be done for `install-exec-am'.
make[3]: Nothing to be done for `install-data-am'.
make[3]: Leaving directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/doc'
make[2]: Leaving directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/doc'
make[1]: Leaving directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/doc'
Making install in samples
make[1]: Entering directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/samples'
make[2]: Entering directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/samples'
make[2]: Nothing to be done for `install-exec-am'.
make[2]: Nothing to be done for `install-data-am'.
make[2]: Leaving directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/samples'
make[1]: Leaving directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0/samples'
make[1]: Entering directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0'
make[2]: Entering directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0'
make[2]: Nothing to be done for `install-exec-am'.
touch /etc/odbcinst.ini
touch /etc/odbc.ini
mkdir -p /etc/ODBCDataSources
/usr/bin/odbc_config --header > /usr/include/unixodbc_conf.h
make[2]: Leaving directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0'
make[1]: Leaving directory `/tmp/unixODBC.4656.11060.7585/unixODBC-2.3.0'


[root@localhost unixODBC-2.3.0]# /usr/bin/odbc_config --version
2.3.0
[root@localhost msodbcsql-11.0.2270.0]# odbc_config --odbcinstini
/etc/odbcinst.ini
[root@localhost msodbcsql-11.0.2270.0]# cat /etc/odbcinst.ini

INSTALLING THE MICROSOFT ODBC DRIVER 11 FOR SQL SERVER ON LINUX

[root@localhost msodbcsql-11.0.2270.0]# pwd
/root/msodbcsql-11.0.2270.0
[root@localhost msodbcsql-11.0.2270.0]# odbc_config --odbcinstini
/etc/odbcinst.ini


[root@localhost msodbcsql-11.0.2270.0]# cat /etc/odbcinst.ini
[root@localhost msodbcsql-11.0.2270.0]# ./install.sh

Microsoft ODBC Driver 11 for SQL Server Installation Script
Copyright Microsoft Corp.

Starting install for Microsoft ODBC Driver 11 for SQL Server

Unknown command given.
Usage: install.sh [global options] command [command options]

Global options:
   --help - prints this message
Valid commands are verify and install
  install) install the driver (also verifies before installing and registers
           with the driver manager)
  verify) check to make sure the unixODBC DriverManager configuration is
          correct before installing
install command take the following options:
  --bin-dir=<directory> - location to create symbolic links for bcp and sqlcmd utilities,
      defaults to the /usr/bin directory
  --lib-dir=<directory> - location to deposit the Microsoft SQL Server ODBC Driver for Linux,
      defaults to the /opt/microsoft/msodbcsql/lib directory
  --force - continues installation even if an error occurs
  --accept-license - forgoes showing the EULA and implies agreement with its contents

[root@localhost msodbcsql-11.0.2270.0]# ./install.sh verify

Microsoft ODBC Driver 11 for SQL Server Installation Script
Copyright Microsoft Corp.

Starting install for Microsoft ODBC Driver 11 for SQL Server

Checking for 64 bit Linux compatible OS ..................................... OK
Checking required libs are installed ........................................ OK
unixODBC utilities (odbc_config and odbcinst) installed ..................... OK
unixODBC Driver Manager version 2.3.0 installed ............................. OK
unixODBC Driver Manager configuration correct .............................. OK*
Microsoft ODBC Driver 11 for SQL Server already installed ............ NOT FOUND

Install log created at /tmp/msodbcsql.19893.20704.15951/install.log.

One or more steps may have an *. See README for more information regarding
these steps.
[root@localhost msodbcsql-11.0.2270.0]#

[root@localhost msodbcsql-11.0.2270.0]# ./install.sh install

Microsoft ODBC Driver 11 for SQL Server Installation Script
Copyright Microsoft Corp.

Starting install for Microsoft ODBC Driver 11 for SQL Server

MICROSOFT SOFTWARE LICENSE TERMS

MICROSOFT ODBC DRIVER 11 FOR SQL SERVER
MICROSOFT COMMAND LINE UTILITIES 11 FOR SQL SERVER

These license terms are an agreement between Microsoft Corporation (or based on
where you live, one of its affiliates) and you. Please read them. They apply to
the software named above, which includes the media on which you received it, if
any. The terms also apply to any Microsoft
<....Omitted for clear reading...>

Enter YES to accept the license or anything else to terminate the installation: YES

Checking for 64 bit Linux compatible OS ..................................... OK
Checking required libs are installed ........................................ OK
unixODBC utilities (odbc_config and odbcinst) installed ..................... OK
unixODBC Driver Manager version 2.3.0 installed ............................. OK
unixODBC Driver Manager configuration correct .............................. OK*
Microsoft ODBC Driver 11 for SQL Server already installed ............ NOT FOUND
Microsoft ODBC Driver 11 for SQL Server files copied ........................ OK
Symbolic links for bcp and sqlcmd created ................................... OK
Microsoft ODBC Driver 11 for SQL Server registered ................... INSTALLED

Install log created at /tmp/msodbcsql.16000.1297.15412/install.log.

One or more steps may have an *. See README for more information regarding
these steps.
[root@localhost msodbcsql-11.0.2270.0]#


[root@localhost msodbcsql-11.0.2270.0]# odbc_config --odbcinstini
/etc/odbcinst.ini
[root@localhost msodbcsql-11.0.2270.0]# cat /etc/odbcinst.ini
[ODBC Driver 11 for SQL Server]
Description=Microsoft ODBC Driver 11 for SQL Server
Driver=/opt/microsoft/msodbcsql/lib64/libmsodbcsql-11.0.so.2270.0
Threading=1
UsageCount=1

[root@localhost msodbcsql-11.0.2270.0]# odbcinst -q -d -n "ODBC Driver 11 for SQL Server"
[ODBC Driver 11 for SQL Server]
Description=Microsoft ODBC Driver 11 for SQL Server
Driver=/opt/microsoft/msodbcsql/lib64/libmsodbcsql-11.0.so.2270.0
Threading=1
UsageCount=1
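install.sh registers the driver by writing a [bracketed] section like the one above into odbcinst.ini; each section names one driver. As a sketch (pure awk, no unixODBC needed), the registered driver names can be listed straight from such a file:

```shell
#!/bin/bash
# list_drivers FILE -- print the section names (driver names) from an odbcinst.ini-style file.
list_drivers() {
  awk -F'[][]' '/^\[/{print $2}' "$1"
}

# Build a sample file matching the contents shown above, then list it.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
[ODBC Driver 11 for SQL Server]
Description=Microsoft ODBC Driver 11 for SQL Server
Driver=/opt/microsoft/msodbcsql/lib64/libmsodbcsql-11.0.so.2270.0
Threading=1
UsageCount=1
EOF
list_drivers "$tmp"   # → ODBC Driver 11 for SQL Server
```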

[root@localhost msodbcsql-11.0.2270.0]# /opt/microsoft/msodbcsql/bin/sqlcmd-11.0.2270.0
Microsoft (R) SQL Server Command Line Tool
Version 11.0.2270.0 Linux
Copyright (c) 2012 Microsoft. All rights reserved.

usage: sqlcmd            [-U login id]          [-P password]
  [-S server or Dsn if -D is provided]
  [-H hostname]          [-E trusted connection]
  [-N Encrypt Connection][-C Trust Server Certificate]
  [-d use database name] [-l login timeout]     [-t query timeout]
  [-h headers]           [-s colseparator]      [-w screen width]
  [-a packetsize]        [-e echo input]        [-I Enable Quoted Identifiers]
  [-c cmdend]
  [-q "cmdline query"]   [-Q "cmdline query" and exit]
  [-m errorlevel]        [-V severitylevel]     [-W remove trailing spaces]
  [-u unicode output]    [-r[0|1] msgs to stderr]
  [-i inputfile]         [-o outputfile]
  [-k[1|2] remove[replace] control characters]
  [-y variable length type display width]
  [-Y fixed length type display width]
  [-p[1] print statistics[colon format]]
  [-R use client regional setting]
  [-K application intent]
  [-M multisubnet failover]
  [-b On error batch abort]
  [-D Dsn flag, indicate -S is Dsn]
  [-X[1] disable commands, startup script, environment variables [and exit]]
  [-x disable variable substitution]
  [-? show syntax summary]

[root@localhost msodbcsql-11.0.2270.0]# /opt/microsoft/msodbcsql/bin/bcp-11.0.2270.0
usage: /opt/microsoft/msodbcsql/bin/bcp-11.0.2270.0 {dbtable | query} {in | out | queryout | format} datafile
  [-m maxerrors]            [-f formatfile]          [-e errfile]
  [-F firstrow]             [-L lastrow]             [-b batchsize]
  [-n native type]          [-c character type]      [-w wide character type]
  [-N keep non-text native] [-q quoted identifier]
  [-t field terminator]     [-r row terminator]
  [-a packetsize]           [-K application intent]
  [-S server name or DSN if -D provided]             [-D treat -S as DSN]
  [-U username]             [-P password]
  [-T trusted connection]   [-v version]             [-R regional enable]
  [-k keep null values]     [-E keep identity values]
  [-h "load hints"]         [-d database name]
 
[root@localhost msodbcsql-11.0.2270.0]# cat ~/.odbc.ini
[MSSQLPROD1]
Driver = ODBC Driver 11 for SQL Server
Server = tcp:192.168.6.132,49160
#Server = 192.168.6.132,49160
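The DSN above was added to ~/.odbc.ini by hand; the same entry can be appended non-interactively, which is handy when provisioning several hosts. A sketch, assuming unixODBC's ODBCINI environment variable (which points the library at an alternate DSN file) and the server address used in this post:

```shell
#!/bin/bash
# Append a user DSN to the file unixODBC reads (override ODBCINI when testing).
ODBCINI="${ODBCINI:-$HOME/.odbc.ini}"
cat >> "$ODBCINI" <<'EOF'
[MSSQLPROD1]
Driver = ODBC Driver 11 for SQL Server
Server = tcp:192.168.6.132,49160
Database = Northwnd
EOF
grep -q '^\[MSSQLPROD1\]' "$ODBCINI" && echo "DSN MSSQLPROD1 registered"
```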

 
[root@localhost msodbcsql-11.0.2270.0]# isql MSSQLPROD1 sa p_ssw0rd
SQL>  select cast(@@version as char(100))
+-----------------------------------------------------------------------------------------------------+
|                                                                                                     |
+-----------------------------------------------------------------------------------------------------+
| Microsoft SQL Server 2014 - 12.0.2000.8 (X64)
        Feb 20 2014 20:04:26
        Copyright (c) Microsoft Corpo|
+-----------------------------------------------------------------------------------------------------+
SQLRowCount returns 0
1 rows fetched
SQL> quit
[root@localhost msodbcsql-11.0.2270.0]#

[root@localhost msodbcsql-11.0.2270.0]# cat ~/.odbc.ini
[MSSQLPROD1]
Driver = ODBC Driver 11 for SQL Server
Server = tcp:192.168.6.132,49160
#Server = 192.168.6.132,49160
Database = Northwnd

[root@localhost msodbcsql-11.0.2270.0]# isql MSSQLPROD1 sa p_ssw0rd
+---------------------------------------+
| Connected!                            |
|                                       |
| sql-statement                         |
| help [tablename]                      |
| quit                                  |
|                                       |
+---------------------------------------+
SQL> select * from dbo.region
+------------+---------------------------------------------------+
| RegionID   | RegionDescription                                 |
+------------+---------------------------------------------------+
| 1          | Eastern                                           |
| 2          | Western                                           |
| 3          | Northern                                          |
| 4          | Southern                                          |
| 5          | Central                                           |
| 6          | Non-USA                                           |
| 7          | Singapore                                         |
| 8          | China                                             |
| 9          | Duplicated                                        |
| 10         | Duplicated2                                       |
+------------+---------------------------------------------------+
SQLRowCount returns 0
10 rows fetched

[root@localhost msodbcsql-11.0.2270.0]# /opt/microsoft/msodbcsql/bin/sqlcmd-11.0.2270.0 -S 192.168.6.132,49160 -U sa
Password:
1> select @@version
2> go
                                                                                                                                                      
-----------------------------------------------------------------------------------------------------------------
Microsoft SQL Server 2014 - 12.0.2000.8 (X64)
        Feb 20 2014 20:04:26
        Copyright (c) Microsoft Corporation
        Enterprise Edition (64-bit) on Windows NT 6.3 <X64> (Build 9600: ) (Hypervisor)


(1 rows affected)

1> select * from northwnd.dbo.region
2> go
RegionID    RegionDescription
----------- --------------------------------------------------
          1 Eastern
          2 Western
          3 Northern
          4 Southern
          5 Central
          6 Non-USA
          7 Singapore
          8 China
          9 Duplicated
         10 Duplicated2

(10 rows affected)
1> quit

[root@localhost msodbcsql-11.0.2270.0]# /opt/microsoft/msodbcsql/bin/sqlcmd-11.0.2270.0 -S MSSQLPROD1 -D -U sa -P p_ssw0rd
1> select @@version;
2> go
                                                                                                                                                      
------------------------------------------------------------------------
Microsoft SQL Server 2014 - 12.0.2000.8 (X64)
        Feb 20 2014 20:04:26
        Copyright (c) Microsoft Corporation
        Enterprise Edition (64-bit) on Windows NT 6.3 <X64> (Build 9600: ) (Hypervisor)


(1 rows affected)
1> quit

ORA-15410: Disks in disk group DATA do not have equal size.

SQL> select failgroup,path,os_mb from v$asm_disk;

FAILGROUP                      PATH                                OS_MB
------------------------------ ------------------------------ ----------
                               ORCL:D5GD1                           5119
                               ORCL:D5GD4                           5119
                               ORCL:D5GD2                           5119
                               ORCL:D5GD3                           5119
D2GD1                          ORCL:D2GD1                           2047
D2GD2                          ORCL:D2GD2                           2047
D2GD3                          ORCL:D2GD3                           2047

7 rows selected.

SQL> alter diskgroup data add failgroup fg1 disk 'ORCL:D5GD1';
alter diskgroup data add failgroup fg1 disk 'ORCL:D5GD1'
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15410: Disks in disk group DATA do not have equal size.

SQL> alter diskgroup data add failgroup fg1 disk 'ORCL:D5GD1','ORCL:D5GD2' failgroup fg2 disk 'ORCL:D5GD3','ORCL:D5GD4'
  2  drop disk 'D2GD1','D2GD2','D2GD3';

Diskgroup altered.


CAUSE


1) Starting on 12.1.0.2 ASM release, this ASM constraint/validation is available:
15410, 00000, "Disks in disk group %s do not have equal size."
// *Cause: The disks in the diskgroup were not of equal size.
// *Action: Ensure that all disks in the diskgroup are of equal size. If
//          adding new disks to the diskgroup, their size must be equal to
//          the size of the existing disks in the diskgroup. If resizing, all
//          disks in the diskgroup must be resized to the same size.

2) Disks with uneven capacity can create allocation problems (e.g. "ORA-15041: diskgroup space exhausted") that prevent full use of all of the available storage in the failgroup/diskgroup.

3) This validation/constraint ensures that all disks in the same diskgroup have the same size; doing so provides more predictable overall performance and space utilization.

4) If the disks are the same size, then ASM spreads the files evenly across all of the disks in the diskgroup. This allocation pattern maintains every disk at the same capacity level and ensures that all of the disks in a diskgroup have the same I/O load. Because ASM load balances workload among all of the disks in a diskgroup, different ASM disks should not share the same physical drive.

5) This ASM feature is enabled by default on the 12.1.0.2 Grid Infrastructure/ASM release and onwards.
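The constraint described above is easy to mirror outside the database: a minimal sketch (disk names and OS_MB sizes copied from the v$asm_disk query in this post) of the "all disks must be equal size" check that 12.1.0.2 enforces:

```python
# Sketch of the ORA-15410 validation: every disk joining a diskgroup must
# report the same OS_MB size. Disk names/sizes are taken from the
# v$asm_disk output above; this is an illustration, not Oracle's code.
candidate_disks = {
    "ORCL:D5GD1": 5119, "ORCL:D5GD2": 5119,
    "ORCL:D5GD3": 5119, "ORCL:D5GD4": 5119,
}
existing_disks = {"ORCL:D2GD1": 2047, "ORCL:D2GD2": 2047, "ORCL:D2GD3": 2047}

def sizes_match(disks):
    """True when every disk in the mapping has the same size in MB."""
    return len(set(disks.values())) <= 1

# Mixing a 5119 MB disk into a group of 2047 MB disks trips the check,
# which is why the ALTER DISKGROUP above drops the old disks in the
# same statement that adds the new ones.
print(sizes_match({**existing_disks, "ORCL:D5GD1": 5119}))  # False
print(sizes_match(candidate_disks))  # True
```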

SQL Server: Cannot grant, deny, or revoke permissions to sa, dbo, entity owner, information_schema, sys, or yourself


USE [NORTHWND];
GO
--Create temporary principal
CREATE LOGIN login1 WITH PASSWORD = 'J345#$)thb';
GO
CREATE USER user1 FOR LOGIN login1;
GRANT CREATE SCHEMA to user1;
GRANT CREATE TABLE to user1;
GO
--Display current execution context.
SELECT SUSER_NAME(), USER_NAME();
-- Set the execution context to login1.
EXECUTE AS LOGIN = 'login1';
--Verify the execution context is now login1.
SELECT SUSER_NAME(), USER_NAME();
--Create schema & table
create schema user1;
create table user1.TBL1 (ID INTEGER PRIMARY KEY, Name VARCHAR(200));
GO
SELECT * FROM user1.TBL1;
-- DENY SELECT ON [user1].[TBL1] TO [user1]
-- Cannot grant, deny, or revoke permissions to sa, dbo, entity owner, information_schema, sys, or yourself.
REVERT;
--Clean up
drop table user1.TBL1;
drop schema user1;
drop user user1;
drop login login1;

Business Intelligence in SQL Server 2014 - SQL Server Data Tools (SSDT)


SQL Server Data Tools for Business Intelligence (SSDT BI), previously known as Business Intelligence Development Studio (BIDS), is used to create Analysis Services models, Reporting Services reports, and Integration Services packages. In this pre-release version of SQL Server 2014, SQL Server Setup no longer installs SSDT BI.

You can download SSDT-BI from the following locations:

• Download SSDT-BI for Visual Studio 2013


• Download SSDT-BI for Visual Studio 2012

Enable DB2 statement monitoring using event trace


C:\Program Files\IBM\SQLLIB\BIN>db2 connect to SAMPLEDB

   Database Connection Information

 Database server        = DB2/NT64 9.7.6
 SQL authorization ID   = ADMINIST...
 Local database alias   = SAMPLEDB

C:\Program Files\IBM\SQLLIB\BIN>db2 get monitor switches

            Monitor Recording Switches
Buffer Pool Activity Information  (BUFFERPOOL) = OFF
Lock Information                        (LOCK) = OFF
Sorting Information                     (SORT) = OFF
SQL Statement Information          (STATEMENT) = OFF
Table Activity Information             (TABLE) = OFF
Take Timestamp Information         (TIMESTAMP) = ON  03/31/2015 15:34:19.915686
Unit of Work Information                 (UOW) = OFF


C:\Program Files\IBM\SQLLIB\BIN>db2 update monitor switches using statement on
DB20000I  The UPDATE MONITOR SWITCHES command completed successfully.

C:\Program Files\IBM\SQLLIB\BIN>db2 create event monitor stmon for statements write to file 'C:\temp'
DB20000I  The SQL command completed successfully.


C:\Program Files\IBM\SQLLIB\BIN>db2 set event monitor stmon state=1
DB20000I  The SQL command completed successfully.

C:\Program Files\IBM\SQLLIB\BIN>db2 create table tbl_donghua(id integer)
DB20000I  The SQL command completed successfully.

C:\Program Files\IBM\SQLLIB\BIN>db2 insert into tbl_donghua values (1)
DB20000I  The SQL command completed successfully.

C:\Program Files\IBM\SQLLIB\BIN>db2 select * from tbl_donghua

ID
-----------
          1

  1 record(s) selected.


C:\Program Files\IBM\SQLLIB\BIN>db2 drop table tbl_donghua
DB20000I  The SQL command completed successfully.

C:\Program Files\IBM\SQLLIB\BIN>db2 set event monitor stmon state 0
DB20000I  The SQL command completed successfully.

C:\Program Files\IBM\SQLLIB\BIN>db2evmon -path c:\temp > c:\temp\db2stmt.sql

Reading c:\temp\00000000.EVT ...

C:\Program Files\IBM\SQLLIB\BIN>db2 drop event monitor stmon
DB20000I  The SQL command completed successfully.

C:\Program Files\IBM\SQLLIB\BIN>db2 update monitor switches using statement off
DB20000I  The UPDATE MONITOR SWITCHES command completed successfully.

C:\Program Files\IBM\SQLLIB\BIN>db2 get monitor switches

            Monitor Recording Switches

Switch list for db partition number 0
Buffer Pool Activity Information  (BUFFERPOOL) = OFF
Lock Information                        (LOCK) = OFF
Sorting Information                     (SORT) = OFF
SQL Statement Information          (STATEMENT) = OFF
Table Activity Information             (TABLE) = OFF
Take Timestamp Information         (TIMESTAMP) = ON  03/31/2015 15:34:19.915686
Unit of Work Information                 (UOW) = OFF




5) Statement Event ...
  Appl Handle: 125
  Appl Id: *LOCAL.DB2.150331075101
  Appl Seq number: 00004

  Record is the result of a flush: FALSE
  -------------------------------------------
  Type     : Dynamic
  Operation: Execute Immediate
  Section  : 203
  Creator  : NULLID  
  Package  : SQLC2H23
  Consistency Token  : AAAAABBc
  Package Version ID  : 
  Cursor   : 
  Cursor was blocking: FALSE
  Text     : create table tbl_donghua(id integer)
  -------------------------------------------
  Start Time: 03/31/2015 15:54:26.746074
  Stop Time:  03/31/2015 15:54:26.852143
  Elapsed Execution Time:  0.106069 seconds
  Number of Agents created: 1
  User CPU: 0.015625 seconds
  System CPU: 0.000000 seconds
  Statistic fabrication time (milliseconds): 0
  Synchronous runstats time  (milliseconds): 0
  Fetch Count: 0
  Sorts: 0
  Total sort time: 0
  Sort overflows: 0
  Rows read: 10
  Rows written: 7
  Internal rows deleted: 0
  Internal rows updated: 0
  Internal rows inserted: 0
  Bufferpool data logical reads: 0
  Bufferpool data physical reads: 0
  Bufferpool temporary data logical reads: 0
  Bufferpool temporary data physical reads: 0
  Bufferpool index logical reads: 0
  Bufferpool index physical reads: 0
  Bufferpool temporary index logical reads: 0
  Bufferpool temporary index physical reads: 0
  Bufferpool xda logical page reads: 0
  Bufferpool xda physical page reads: 0
  Bufferpool temporary xda logical page reads: 0
  Bufferpool temporary xda physical page reads: 0
  SQLCA:
   sqlcode: 0
   sqlstate: 00000

42) Statement Event ...
  Appl Handle: 125
  Appl Id: *LOCAL.DB2.150331075101
  Appl Seq number: 00005

  Record is the result of a flush: FALSE
  -------------------------------------------
  Type     : Dynamic
  Operation: Execute Immediate
  Section  : 203
  Creator  : NULLID  
  Package  : SQLC2H23
  Consistency Token  : AAAAABBc
  Package Version ID  : 
  Cursor   : 
  Cursor was blocking: FALSE
  Text     : insert into tbl_donghua values (1)
44) Statement Event ...
  Appl Handle: 125
  Appl Id: *LOCAL.DB2.150331075101
  Appl Seq number: 00006

  Record is the result of a flush: FALSE
  -------------------------------------------
  Type     : Dynamic
  Operation: Prepare
  Section  : 201
  Creator  : NULLID  
  Package  : SQLC2H23
  Consistency Token  : AAAAABBc
  Package Version ID  : 
  Cursor   : SQLCUR201
  Cursor was blocking: FALSE
  Text     : select * from tbl_donghua
45) Statement Event ...
  Appl Handle: 125
  Appl Id: *LOCAL.DB2.150331075101
  Appl Seq number: 00006

  Record is the result of a flush: FALSE
  -------------------------------------------
  Type     : Dynamic
  Operation: Open
  Section  : 201
  Creator  : NULLID  
  Package  : SQLC2H23
  Consistency Token  : AAAAABBc
  Package Version ID  : 
  Cursor   : SQLCUR201
  Cursor was blocking: TRUE
  Text     : select * from tbl_donghua
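The db2evmon text written to c:\temp\db2stmt.sql is plain text and easy to post-process. A minimal sketch (the sample is a trimmed copy of the trace above) that pulls out each statement's text and, where present, its elapsed execution time:

```python
import re

# Parse db2evmon statement-event output for statement text and elapsed
# time. The sample string is condensed from the trace shown above.
sample = """\
  Text     : create table tbl_donghua(id integer)
  -------------------------------------------
  Start Time: 03/31/2015 15:54:26.746074
  Stop Time:  03/31/2015 15:54:26.852143
  Elapsed Execution Time:  0.106069 seconds
  Text     : insert into tbl_donghua values (1)
"""

stmts = re.findall(r"Text\s+:\s+(.+)", sample)
elapsed = [float(x) for x in re.findall(r"Elapsed Execution Time:\s+([\d.]+)", sample)]
print(stmts)    # ['create table tbl_donghua(id integer)', 'insert into tbl_donghua values (1)']
print(elapsed)  # [0.106069]
```

Against a full trace, sorting the (elapsed, statement) pairs gives a quick "slowest statements" list without loading the file into a database.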

Adding Oracle RAC Service using srvctl command


[oracle@vmxrac01 ~]$ srvctl add service -db orcl -service orcl_hr -preferred orcl1,orcl2
[oracle@vmxrac01 ~]$
[oracle@vmxrac01 ~]$ srvctl start service -d orcl -service orcl_hr
[oracle@vmxrac01 ~]$ srvctl config service  -db orcl -service orcl_hr
Service name: orcl_hr
Server pool:
Cardinality: 2
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Global: false
Commit Outcome: false
Failover type:
Failover method:
TAF failover retries:
TAF failover delay:
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
Pluggable database name:
Maximum lag time: ANY
SQL Translation Profile:
Retention: 86400 seconds
Replay Initiation Time: 300 seconds
Session State Consistency:
GSM Flags: 0
Service is enabled
Preferred instances: orcl1,orcl2
Available instances:
[oracle@vmxrac01 ~]$

[oracle@vmxrac01 ~]$ crsctl status res ora.orcl.orcl_hr.svc
NAME=ora.orcl.orcl_hr.svc
TYPE=ora.service.type
TARGET=ONLINE            , ONLINE
STATE=ONLINE on vmxrac01, ONLINE on vmxrac02

[oracle@vmxrac01 ~]$ crsctl status res ora.orcl.orcl_hr.svc -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.orcl.orcl_hr.svc
      1        ONLINE  ONLINE       vmxrac01                 STABLE
      2        ONLINE  ONLINE       vmxrac02                 STABLE
--------------------------------------------------------------------------------

[oracle@vmxrac01 ~]$ crsctl status res ora.orcl.orcl_hr.svc -p
NAME=ora.orcl.orcl_hr.svc
TYPE=ora.service.type
ACL=owner:oracle:rwx,pgrp:oinstall:r--,other::r--,group:dba:r-x,user:oracle:r-x
ACTIONS=
ACTION_SCRIPT=
ACTION_TIMEOUT=60
ACTIVE_PLACEMENT=0
AGENT_FILENAME=%CRS_HOME%/bin/oraagent%CRS_EXE_SUFFIX%
AGENT_PARAMETERS=
AQ_HA_NOTIFICATION=0
AUTO_START=restore
CARDINALITY=2
CHECK_INTERVAL=600
CHECK_TIMEOUT=30
CLB_GOAL=LONG
CLEAN_TIMEOUT=60
COMMIT_OUTCOME=0
DEGREE=1
DELETE_TIMEOUT=60
DESCRIPTION=Oracle Service resource
DTP=0
EDITION=
ENABLED=1
FAILOVER_DELAY=0
FAILOVER_METHOD=
FAILOVER_RETRIES=
FAILOVER_TYPE=
FAILURE_INTERVAL=0
FAILURE_THRESHOLD=0
GEN_SERVICE_NAME=orcl_hr
GLOBAL=false
GSM_FLAGS=0
HOSTING_MEMBERS=
INSTANCE_FAILOVER=1
INTERMEDIATE_TIMEOUT=0
LOAD=1
LOGGING_LEVEL=1
MANAGEMENT_POLICY=AUTOMATIC
MAX_LAG_TIME=ANY
MODIFY_TIMEOUT=60
NLS_LANG=
OFFLINE_CHECK_INTERVAL=0
PLACEMENT=restricted
PLUGGABLE_DATABASE=
RELOCATE_BY_DEPENDENCY=1
REPLAY_INITIATION_TIME=300
RESTART_ATTEMPTS=0
RETENTION=86400
RLB_GOAL=NONE
ROLE=PRIMARY
SCRIPT_TIMEOUT=60
SERVER_CATEGORY=
SERVER_POOLS=ora.orcl_orcl_hr
SERVICE_NAME=orcl_hr
SERVICE_NAME_PQ=
SERVICE_TYPE=MAIN
SESSION_NOREPLAY=false
SESSION_STATE_CONSISTENCY=
SQL_TRANSLATION_PROFILE=
START_CONCURRENCY=0
START_DEPENDENCIES=hard(ora.orcl.db,type:ora.cluster_vip_net1.type) weak(type:ora.listener.type) pullup(type:ora.cluster_vip_net1.type) pullup:always(ora.orcl.db)
START_TIMEOUT=600
STOP_CONCURRENCY=0
STOP_DEPENDENCIES=hard(intermediate:ora.orcl.db,type:ora.cluster_vip_net1.type)
STOP_TIMEOUT=600
TAF_FAILOVER_DELAY=
TAF_POLICY=NONE
TYPE_VERSION=3.2
UPTIME_THRESHOLD=1h
USER_WORKLOAD=yes
USE_STICKINESS=0
USR_ORA_DISCONNECT=false
USR_ORA_ENV=
USR_ORA_FLAGS=
USR_ORA_OPEN_MODE=
USR_ORA_OPI=false
USR_ORA_STOP_MODE=


Enable Innodb lock monitor

mysql> prompt session 3>
PROMPT set to 'prompt session 3 '

session 3> CREATE TABLE innodb_lock_monitor (a INT) ENGINE=INNODB;
Query OK, 0 rows affected, 1 warning (0.03 sec)

As of MySQL 5.6.16, you can also enable the InnoDB Lock Monitor by setting the innodb_status_output_locks system variable to ON. As with the CREATE TABLE method for enabling InnoDB Monitors, both the InnoDB standard Monitor and the InnoDB Lock Monitor must be enabled to have InnoDB Lock Monitor data printed periodically:
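Once both switches are on, the lock sections of the monitor output can be scanned mechanically. A minimal sketch (the sample line is condensed from the SHOW ENGINE INNODB STATUS output later in this post) that extracts which transaction is waiting on which table:

```python
import re

# Pull waiting record locks out of SHOW ENGINE INNODB STATUS text.
# The sample is condensed from the monitor output shown in this post.
status = """\
---TRANSACTION 3019, ACTIVE 2 sec starting index read
------- TRX HAS BEEN WAITING 2 SEC FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 6 page no 4 n bits 120 index `GEN_CLUST_INDEX` of table `test`.`tbl_inno` trx id 3019 lock_mode X waiting
"""

# Capture (schema, table, trx id, lock mode) for lines marked "waiting".
pattern = r"table `(\w+)`\.`(\w+)` trx id (\d+) lock_mode (\w+) waiting"
waits = re.findall(pattern, status)
print(waits)  # [('test', 'tbl_inno', '3019', 'X')]
```

In a live system the same parsing can be applied to the result of `SHOW ENGINE INNODB STATUS` fetched over a client connection, to alert on long lock waits.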


session 3> status;
--------------
/usr/local/mysql/bin/mysql  Ver 14.14 Distrib 5.6.23, for linux-glibc2.5 (x86_64) using  EditLine wrapper

Connection id:          2
Current database:       test
Current user:           root@localhost
SSL:                    Not in use
Current pager:          stdout
Using outfile:          ''
Using delimiter:        ;
Server version:         5.6.23-enterprise-commercial-advanced-log MySQL Enterprise Server - Advanced Edition (Commercial)
Protocol version:       10
Connection:             Localhost via UNIX socket
Server characterset:    latin1
Db     characterset:    latin1
Client characterset:    utf8
Conn.  characterset:    utf8
UNIX socket:            /tmp/mysql-server1.sock
Uptime:                 2 hours 40 min 41 sec

Threads: 3  Questions: 314  Slow queries: 1  Opens: 75  Flush tables: 1  Open tables: 68  Queries per second avg: 0.032
--------------


session 3> set GLOBAL innodb_status_output=ON;
Query OK, 0 rows affected (0.00 sec)

session 3> set GLOBAL innodb_status_output_locks=ON;
Query OK, 0 rows affected (0.00 sec)



session 3> show engine innodb status \G
*************************** 1. row ***************************
  Type: InnoDB
  Name:
Status:
=====================================
2015-04-07 14:04:05 7f0fed325700 INNODB MONITOR OUTPUT
=====================================
Per second averages calculated from the last 10 seconds
-----------------
BACKGROUND THREAD
-----------------
srv_master_thread loops: 47 srv_active, 0 srv_shutdown, 7787 srv_idle
srv_master_thread log flush and writes: 7834
----------
SEMAPHORES
----------
OS WAIT ARRAY INFO: reservation count 19
OS WAIT ARRAY INFO: signal count 19
Mutex spin waits 1, rounds 30, OS waits 0
RW-shared spins 16, rounds 480, OS waits 16
RW-excl spins 0, rounds 90, OS waits 3
Spin rounds per wait: 30.00 mutex, 30.00 RW-shared, 90.00 RW-excl
------------
TRANSACTIONS
------------
Trx id counter 3020
Purge done for trx's n:o < 3016 undo n:o < 0 state: running but idle
History list length 17
LIST OF TRANSACTIONS FOR EACH SESSION:
---TRANSACTION 0, not started
MySQL thread id 2, OS thread handle 0x7f0fed325700, query id 301 localhost root init
show engine innodb status
---TRANSACTION 3019, ACTIVE 2 sec starting index read
mysql tables in use 1, locked 1
LOCK WAIT 2 lock struct(s), heap size 360, 1 row lock(s)
MySQL thread id 4, OS thread handle 0x7f0fed2a3700, query id 300 localhost root updating
delete from tbl_inno
------- TRX HAS BEEN WAITING 2 SEC FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 6 page no 4 n bits 120 index `GEN_CLUST_INDEX` of table `test`.`tbl_inno` trx id 3019 lock_mode X waiting
Record lock, heap no 2 PHYSICAL RECORD: n_fields 5; compact format; info bits 32
 0: len 6; hex 000000000200; asc       ;;
 1: len 6; hex 000000000bb8; asc       ;;
 2: len 7; hex 2b000001bb0110; asc +      ;;
 3: len 4; hex 80000001; asc     ;;
 4: len 30; hex 612020202020202020202020202020202020202020202020202020202020; asc a                             ; (total 255 bytes);

------------------
TABLE LOCK table `test`.`tbl_inno` trx id 3019 lock mode IX
RECORD LOCKS space id 6 page no 4 n bits 120 index `GEN_CLUST_INDEX` of table `test`.`tbl_inno` trx id 3019 lock_mode X waiting
Record lock, heap no 2 PHYSICAL RECORD: n_fields 5; compact format; info bits 32
 0: len 6; hex 000000000200; asc       ;;
 1: len 6; hex 000000000bb8; asc       ;;
 2: len 7; hex 2b000001bb0110; asc +      ;;
 3: len 4; hex 80000001; asc     ;;
 4: len 30; hex 612020202020202020202020202020202020202020202020202020202020; asc a                             ; (total 255 bytes);

---TRANSACTION 3000, ACTIVE 1294 sec
5 lock struct(s), heap size 1184, 153 row lock(s), undo log entries 149
MySQL thread id 3, OS thread handle 0x7f0fed2e4700, query id 205 localhost root cleaning up
TABLE LOCK table `test`.`tbl_inno` trx id 3000 lock mode IX
RECORD LOCKS space id 6 page no 4 n bits 120 index `GEN_CLUST_INDEX` of table `test`.`tbl_inno` trx id 3000 lock_mode X
Record lock, heap no 1 PHYSICAL RECORD: n_fields 1; compact format; info bits 0
 0: len 8; hex 73757072656d756d; asc supremum;;

Record lock, heap no 2 PHYSICAL RECORD: n_fields 5; compact format; info bits 32
 0: len 6; hex 000000000200; asc       ;;
 1: len 6; hex 000000000bb8; asc       ;;
 2: len 7; hex 2b000001bb0110; asc +      ;;
 3: len 4; hex 80000001; asc     ;;
 4: len 30; hex 612020202020202020202020202020202020202020202020202020202020; asc a                             ; (total 255 bytes);

Record lock, heap no 3 PHYSICAL RECORD: n_fields 5; compact format; info bits 32
 0: len 6; hex 000000000201; asc       ;;
 1: len 6; hex 000000000bb8; asc       ;;
 2: len 7; hex 2b000001bb0136; asc +     6;;
 3: len 4; hex 80000001; asc     ;;
 4: len 30; hex 612020202020202020202020202020202020202020202020202020202020; asc a                             ; (total 255 bytes);


RECORD LOCKS space id 6 page no 5 n bits 120 index `GEN_CLUST_INDEX` of table `test`.`tbl_inno` trx id 3000 lock_mode X
Record lock, heap no 1 PHYSICAL RECORD: n_fields 1; compact format; info bits 0
 0: len 8; hex 73757072656d756d; asc supremum;;

Record lock, heap no 2 PHYSICAL RECORD: n_fields 5; compact format; info bits 32
 0: len 6; hex 00000000021a; asc       ;;
 1: len 6; hex 000000000bb8; asc       ;;
 2: len 7; hex 2b000001bb04ec; asc +      ;;
 3: len 4; hex 80000001; asc     ;;
 4: len 30; hex 612020202020202020202020202020202020202020202020202020202020; asc a                             ; (total 255 bytes);

--------
FILE I/O
--------
I/O thread 0 state: waiting for completed aio requests (insert buffer thread)
I/O thread 1 state: waiting for completed aio requests (log thread)
I/O thread 2 state: waiting for completed aio requests (read thread)
I/O thread 3 state: waiting for completed aio requests (read thread)
I/O thread 4 state: waiting for completed aio requests (read thread)
I/O thread 5 state: waiting for completed aio requests (read thread)
I/O thread 6 state: waiting for completed aio requests (write thread)
I/O thread 7 state: waiting for completed aio requests (write thread)
I/O thread 8 state: waiting for completed aio requests (write thread)
I/O thread 9 state: waiting for completed aio requests (write thread)
Pending normal aio reads: 0 [0, 0, 0, 0] , aio writes: 0 [0, 0, 0, 0] ,
 ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0
Pending flushes (fsync) log: 0; buffer pool: 0
181 OS file reads, 669 OS file writes, 295 OS fsyncs
0.00 reads/s, 0 avg bytes/read, 0.00 writes/s, 0.00 fsyncs/s
-------------------------------------
INSERT BUFFER AND ADAPTIVE HASH INDEX
-------------------------------------
Ibuf: size 1, free list len 0, seg size 2, 0 merges
merged operations:
 insert 0, delete mark 0, delete 0
discarded operations:
 insert 0, delete mark 0, delete 0
Hash table size 276671, node heap has 1 buffer(s)
0.00 hash searches/s, 0.00 non-hash searches/s
---
LOG
---
Log sequence number 1868688
Log flushed up to   1868688
Pages flushed up to 1868688
Last checkpoint at  1868688
0 pending log writes, 0 pending chkp writes
233 log i/o's done, 0.00 log i/o's/second
----------------------
BUFFER POOL AND MEMORY
----------------------
Total memory allocated 137363456; in additional pool allocated 0
Dictionary memory allocated 70412
Buffer pool size   8191
Free buffers       7874
Database pages     316
Old database pages 0
Modified db pages  0
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 0, not young 0
0.00 youngs/s, 0.00 non-youngs/s
Pages read 165, created 151, written 414
0.00 reads/s, 0.00 creates/s, 0.00 writes/s
Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s
LRU len: 316, unzip_LRU len: 0
I/O sum[0]:cur[0], unzip sum[0]:cur[0]
--------------
ROW OPERATIONS
--------------
0 queries inside InnoDB, 0 queries in queue
0 read views open inside InnoDB
Main thread process no. 2448, id 139706340439808, state: sleeping
Number of rows inserted 149, updated 0, deleted 149, read 298
0.00 inserts/s, 0.00 updates/s, 0.00 deletes/s, 0.00 reads/s
----------------------------
END OF INNODB MONITOR OUTPUT
============================

1 row in set (0.00 sec)

Change Oracle RAC SCAN from host file to DNS

Step 1: Verify that DNS is working and remove the host-file record

root@vmxdb01:~# nslookup
> vmxdb-scan.dbaglobe.com
Server:         192.168.1.1
Address:        192.168.1.1#53

Name:   vmxdb-scan.dbaglobe.com
Address: 192.168.1.20
Name:   vmxdb-scan.dbaglobe.com
Address: 192.168.1.21
Name:   vmxdb-scan.dbaglobe.com
Address: 192.168.1.19
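Before cutting over, the SCAN name should resolve to all three VIPs, not just the single address that was hardcoded in /etc/hosts. A minimal sketch of that check, using the addresses returned by the nslookup session above (a live check would use `socket.gethostbyname_ex("vmxdb-scan.dbaglobe.com")` instead of a hardcoded list):

```python
# Check that the SCAN name resolves to the expected number of distinct
# VIPs. The resolved addresses below are copied from the nslookup
# output above; in production they would come from a DNS lookup.
def scan_dns_ok(addresses, expected=3):
    """True when DNS returned `expected` distinct SCAN addresses."""
    return len(set(addresses)) == expected

resolved = ["192.168.1.20", "192.168.1.21", "192.168.1.19"]
print(scan_dns_ok(resolved))              # True: all three SCAN VIPs
print(scan_dns_ok(["192.168.1.19"]))      # False: still the single host-file entry
```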

root@vmxdb01:~# vi /etc/hosts

## temporary hardcode scan host
#192.168.1.19    vmxdb-scan      vmxdb-scan.dbaglobe.com


Step 2: Stop the scan listener and scan


root@vmxdb01:~# srvctl stop scan_listener
root@vmxdb01:~# srvctl stop scan

Step 3: Modify the scan based on the IPs returned by scan host name
root@vmxdb01:~# srvctl config scan
SCAN name: vmxdb-scan, Network: 1/192.168.1.64/255.255.255.192/net0
SCAN VIP name: scan1, IP: /vmxdb-scan/192.168.1.19

root@vmxdb01:~# srvctl modify scan -n vmxdb-scan
root@vmxdb01:~# srvctl config scan 
SCAN name: vmxdb-scan, Network: 1/192.168.1.64/255.255.255.192/net0
SCAN VIP name: scan1, IP: /vmxdb-scan/192.168.1.19
SCAN VIP name: scan2, IP: /vmxdb-scan/192.168.1.20
SCAN VIP name: scan3, IP: /vmxdb-scan/192.168.1.21

Step 4: Update scan listener based on SCAN IPs

root@vmxdb01:~# srvctl config scan_listener
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1522

root@vmxdb01:~# srvctl modify scan_listener -u
root@vmxdb01:~# srvctl config scan_listener 
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1522
SCAN Listener LISTENER_SCAN2 exists. Port: TCP:1522
SCAN Listener LISTENER_SCAN3 exists. Port: TCP:1522

Step 5: Start the scan listener and scan

root@vmxdb01:~# srvctl start scan
root@vmxdb01:~# srvctl start scan_listener

Quick fix for error "memory_target needs larger /dev/shm"

2015-04-07 11:52:44.455000 +08:00
Starting ORACLE instance (normal) (OS id: 2292)
CLI notifier numLatches:3 maxDescs:519
WARNING: You are trying to use the MEMORY_TARGET feature. This feature requires the /dev/shm file system to be mounted for at least 1342177280 bytes. /dev/shm is either not mounted or is mounted with available space less than this size. Please fix this so that MEMORY_TARGET can work as expected. Current available is 1025826816 and used is 665677824 bytes. Ensure that the mount point is /dev/shm for this directory.
memory_target needs larger /dev/shm
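The arithmetic behind the warning is straightforward: MEMORY_TARGET needs /dev/shm to hold the whole 1280 MB target, but the mounted tmpfs only has about 978 MB free. A quick sketch using the byte counts from the alert-log message above:

```python
# Byte counts copied from the MEMORY_TARGET warning in the alert log.
memory_target = 1342177280   # 1280 MB required in /dev/shm
shm_available = 1025826816   # free bytes reported by the instance

# How much larger the tmpfs must be, in MB.
shortfall_mb = (memory_target - shm_available) / 1024 / 1024
print(round(shortfall_mb))   # 302 -> remounting tmpfs with size=2g fixes it
```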

[oracle@vmxdb01 ~]$ df -h
Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/ol-root   36G   29G  7.9G  79% /
devtmpfs             1.6G     0  1.6G   0% /dev
tmpfs                1.6G  635M  979M  40% /dev/shm


[root@vmxdb01 ~]# echo "tmpfs      /dev/shm      tmpfs   defaults,size=2g   0   0">> /etc/fstab

[root@vmxdb01 ~]# mount tmpfs
[root@vmxdb01 ~]# df -h
Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/ol-root   36G   29G  7.9G  79% /
devtmpfs             1.6G     0  1.6G   0% /dev
tmpfs                2.0G     0  2.0G   0% /dev/shm
tmpfs                1.6G  8.9M  1.6G   1% /run
tmpfs                1.6G     0  1.6G   0% /sys/fs/cgroup
/dev/sda1            997M  223M  774M  23% /boot
tmpfs                2.0G     0  2.0G   0% /dev/shm


-- Start Oracle database 

2015-04-09 21:37:48.768000 +08:00
Starting ORACLE instance (normal) (OS id: 3273)
CLI notifier numLatches:3 maxDescs:519
**********************************************************************
Dump of system resources acquired for SHARED GLOBAL AREA (SGA)
 Per process system memlock (soft) limit = 128G
 Expected per process system memlock (soft) limit to lock
 SHARED GLOBAL AREA (SGA) into memory: 1280M
 Available system pagesizes:
  4K, 2048K
 Supported system pagesize(s):
  PAGESIZE  AVAILABLE_PAGES  EXPECTED_PAGES  ALLOCATED_PAGES  ERROR(s)
        4K       Configured          327682          327682        NONE
 Reason for not supporting certain system pagesizes:
  2048K - Dynamic allocate and free memory regions
**********************************************************************

[oracle@vmxdb01 ~]$ df -h
Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/ol-root   36G   29G  7.9G  79% /
devtmpfs             1.6G     0  1.6G   0% /dev
tmpfs                2.0G  1.5G  614M  71% /dev/shm
tmpfs                1.6G  8.9M  1.6G   1% /run
tmpfs                1.6G     0  1.6G   0% /sys/fs/cgroup
/dev/sda1            997M  223M  774M  23% /boot


Change the init level on Redhat Linux 7

# systemd uses 'targets' instead of runlevels. By default, there are two main targets:
#
# multi-user.target: analogous to runlevel 3

# graphical.target: analogous to runlevel 5


[root@vmxdb01 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.1 (Maipo)

[root@vmxdb01 ~]# systemctl get-default
graphical.target

[root@vmxdb01 ~]#  systemctl set-default multi-user.target
rm '/etc/systemd/system/default.target'
ln -s '/usr/lib/systemd/system/multi-user.target' '/etc/systemd/system/default.target'

[root@vmxdb01 ~]# systemctl get-default
multi-user.target

How To Set the AUDIT_SYSLOG_LEVEL Parameter


1. Edit /etc/syslog.conf (up to RHEL5) or /etc/rsyslog.conf (RHEL6 onwards) to include the following lines.

(These lines must be placed before the "*.info ..." line; otherwise the audit records are captured in /var/log/messages rather than /var/log/oracle-audit.log.)

# Classify Oracle audit log into local1.warning
local1.warning    /var/log/oracle-audit.log

*.info;mail.none;authpriv.none;cron.none                /var/log/messages

2. Restart syslogd  or rsyslogd service

[root@vmxdb01 ~]# service syslogd restart            # up to RHEL5
[root@vmxdb01 ~]# systemctl restart rsyslog.service  # RHEL7

3. Modify Oracle parameter audit_syslog_level & audit_trail

SQL> show parameter audit

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_file_dest                      string      /u01/app/oracle/admin/cdborcl/
                                                 adump
audit_sys_operations                 boolean     TRUE
audit_syslog_level                   string
audit_trail                          string      DB
unified_audit_sga_queue_size         integer     1048576

SQL> alter system set audit_trail=OS scope=spfile;

System altered.

SQL> alter system set audit_syslog_level="local1.warning" scope=spfile;

System altered.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 1342177280 bytes
Fixed Size                  2924160 bytes
Variable Size             855638400 bytes
Database Buffers          469762048 bytes
Redo Buffers               13852672 bytes
Database mounted.
Database opened.
SQL> show parameter audit

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_file_dest                      string      /u01/app/oracle/admin/cdborcl/
                                                 adump
audit_sys_operations                 boolean     TRUE
audit_syslog_level                   string      LOCAL1.WARNING
audit_trail                          string      OS
unified_audit_sga_queue_size         integer     1048576

4. Verify Oracle Audit Log generated

[root@vmxdb01 log]# tail -f /var/log/oracle-audit.log
Apr  9 22:18:27 vmxdb01 journal: Oracle Audit[5190]: LENGTH: "274" SESSIONID:[6] "190024" ENTRYID:[1] "1" STATEMENT:[1] "1" USERID:[10] "C##DONGHUA" USERHOST:[20] "vmxdb01.dbaglobe.com" TERMINAL:[5] "pts/1" ACTION:[3] "100" RETURNCODE:[4] "1045" COMMENT$TEXT:[26] "Authenticated by: DATABASE" OS$USERID:[6] "oracle" DBID:[10] "2860248834"
Apr  9 22:18:45 vmxdb01 journal: Oracle Audit[5196]: LENGTH: "283" SESSIONID:[6] "200019" ENTRYID:[1] "1" STATEMENT:[1] "1" USERID:[6] "SYSTEM" USERHOST:[20] "vmxdb01.dbaglobe.com" TERMINAL:[5] "pts/1" ACTION:[3] "100" RETURNCODE:[1] "0" COMMENT$TEXT:[26] "Authenticated by: DATABASE" OS$USERID:[6] "oracle" DBID:[10] "2860248834" PRIV$USED:[1] "5"
Apr  9 22:18:59 vmxdb01 journal: Oracle Audit[5196]: LENGTH: "227" SESSIONID:[6] "200019" ENTRYID:[1] "1" USERID:[6] "SYSTEM" ACTION:[3] "101" RETURNCODE:[1] "0" LOGOFF$PREAD:[1] "4" LOGOFF$LREAD:[4] "3013" LOGOFF$LWRITE:[2] "20" LOGOFF$DEAD:[1] "0" DBID:[10] "2860248834" SESSIONCPU:[2] "13"
Apr  9 22:18:59 vmxdb01 journal: Oracle Audit[5205]: LENGTH: "288" SESSIONID:[6] "200020" ENTRYID:[1] "1" STATEMENT:[1] "1" USERID:[10] "C##DONGHUA" USERHOST:[20] "vmxdb01.dbaglobe.com" TERMINAL:[5] "pts/1" ACTION:[3] "100" RETURNCODE:[1] "0" COMMENT$TEXT:[26] "Authenticated by: DATABASE" OS$USERID:[6] "oracle" DBID:[10] "2860248834" PRIV$USED:[1] "5"


More information, refer to Oracle support article: How To Set the AUDIT_SYSLOG_LEVEL Parameter? (Doc ID 553225.1)