SAS Web Application Links

Below are the web links for some of the SAS web applications, along with the SAS Web Application Server instance that each application is deployed on. The port mentioned below is the SAS Web Server port (the default is 80 on Windows and 7980 on Linux).

Web Application Link WebAppServer
SAS Stored Process Web Application http://ServerHost:Port/SASStoredProcess/do SASServer1
SAS Web Administration Console http://ServerHost:Port/SASAdmin
SAS Information Delivery Portal http://ServerHost:Port/SASPortal
SAS BI Dashboard http://ServerHost:Port/SASBIDashboard
SAS Content Server http://ServerHost:Port/SASContentServer/dircontents.jsp
SAS Studio Mid-Tier http://ServerHost:Port/SASStudio SASServer2
SAS Web Report Studio http://ServerHost:Port/SASWebReportStudio
SAS Help Viewer http://ServerHost:Port/SASWebDoc
SAS Customer Intelligence http://ServerHost:Port/SASCIStudio SASServer6
SAS Decision Services Design Server Diagnostics URL: http://ServerHost:Port/RTDMDesign/jsp/Diagnostics.jsp SASServer7
SAS Decision Services Engine Server Web Service URL: http://ServerHost:Port/RTDM/Event ; Diagnostics URL: http://ServerHost:Port/RTDM/Diagnostics.jsp
SAS Decision Services Monitor Diagnostics URL: http://ServerHost:Port/DCSVMonitor/jsp/Diagnostics.jsp
SAS Business Rules Manager Web http://ServerHost:Port/SASDecisionManager
SAS Enterprise Case Management http://ServerHost:Port/SASEntCaseManagement SASServer8
SAS Social Network Analysis http://ServerHost:Port/SASSNA
SAS Financial Crimes Monitor http://ServerHost:Port/SASFINCRM
SAS Model Manager Mid-Tier http://ServerHost:Port/SASDecisionManager SASServer11
SAS Forecast Server http://ServerHost:Port/SASForecastServer/Status
SAS Enterprise Miner Mid-Tier http://ServerHost:Port/SASEnterpriseMinerJWS/Status
SAS Time Series Studio Mid Tier http://ServerHost:Port/SASTimeSeriesStudioMidTier/Status
SAS Visual Analytics Hub http://ServerHost:Port/SASVisualAnalyticsHub SASServer12
SAS Visual Analytics Designer http://ServerHost:Port/SASVisualAnalyticsDesigner
SAS Visual Analytics Viewer http://ServerHost:Port/SASVisualAnalyticsViewer
SAS Visual Data Builder http://ServerHost:Port/SASVisualDataBuilder
SAS Visual Analytics Graph Builder http://ServerHost:Port/SASVisualAnalyticsGraphBuilder
SAS Visual Analytics Administrator http://ServerHost:Port/SASVisualAnalyticsAdministrator
Search Interface to SAS Content 3.3 http://ServerHost:Port/SASSearchService

GlassFish common tasks

GlassFish is an open-source Java application server. Here are some hints about common tasks.

Defaults:

Password: changeit

Admin port: 4848

HTTP port: 8080

HTTPS port: 8181

Message Queue port: 7676

 

To start/stop the default domain:

asadmin start-domain

asadmin stop-domain

To list domains:

asadmin list-domains

 

To start Java DB:

asadmin start-database --dbhome install-dir/javadb

 

IBM Industry Models


IBM has a set of Industry Models that address key business areas within several industries. They cover over 80% of the typical business requirements and can be extended and customized to support a specific set of requirements.

They cover many industries: Banking, Insurance, Financial Markets, Telecommunications, Retail, and Health Care.

Each Industry Model consists of:

– Business Terms

– Business Reporting Requirements

– Data Model

– Process and Service Models (for Banking and Insurance)

Industry Models cover specific cells in the IBM Information FrameWork (IFW), where X below is replaced with the industry abbreviation.

X Service Data Model: the enterprise-wide vocabulary. With reference to the IFW, it contains two levels, A and B.

Level A (Scope Level): industry neutral; it identifies the nine data concepts that can be used to classify any piece of the industry's information.

Level B (Business Level): application-area independent but specific to the related industry. It extends each data concept in Level A with three hierarchies (see the sketch after this list):

– Classification Hierarchy:

  • Concept is …
  • Each concept can be expanded into a set of subtypes
  • It will form the fundamental entities (the backbone or skeleton) of the generated entity-relationship model

– Descriptor Hierarchy:

  • Concept has …
  • Each concept has a set of properties, identified by a descriptor type
  • It will form the attributive entities of the generated ER model (one-to-many relationships)

– Relationship Hierarchy:

  • Concept can do …
  • Each concept can have a set of "constrained" relationships with other concepts, including itself
  • It will form the associative entities of the generated ER model (many-to-many relationships)
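
As a rough illustration of how the three hierarchies map onto the generated ER model, here is a minimal SQL sketch built around a hypothetical "Involved Party" concept (all table, column, and constraint names are illustrative assumptions, not taken from an actual IBM model):

-- Fundamental entity: from the Classification hierarchy ("Concept is ...");
-- the subtype code captures the expansion of the concept into subtypes.
CREATE TABLE involved_party (
    involved_party_id NUMBER PRIMARY KEY,
    party_type_code   VARCHAR2(20)    -- e.g. PERSON / ORGANIZATION
);

-- Attributive entity: from the Descriptor hierarchy ("Concept has ...");
-- a one-to-many relationship back to the fundamental entity.
CREATE TABLE involved_party_address (
    address_id        NUMBER PRIMARY KEY,
    involved_party_id NUMBER NOT NULL REFERENCES involved_party,
    address_type_code VARCHAR2(20),   -- the descriptor type
    address_text      VARCHAR2(200)
);

-- Associative entity: from the Relationship hierarchy ("Concept can do ...");
-- a many-to-many relationship, here between the concept and itself.
CREATE TABLE involved_party_relationship (
    party_id_1        NUMBER NOT NULL REFERENCES involved_party,
    party_id_2        NUMBER NOT NULL REFERENCES involved_party,
    relationship_type VARCHAR2(20),
    PRIMARY KEY (party_id_1, party_id_2, relationship_type)
);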

 

Business Solution Template: defines the reporting requirements; it is used to define the data mart or OLAP data elements.

Application Solution Template: defines the data elements that will be required by external applications.

Enterprise Architecture

Enterprise Architecture is part of strategy planning; it provides the link between the enterprise strategy (wide scope) and tactical implementations (narrow scope), which enhances the accuracy, efficiency, and effectiveness of those tactical implementations.

It consists of three main components:

  1. Architecture Models and Principles
    •  Business Architecture
    • IT Architecture
      • Information System Architecture
        • Application Architecture
        • Data Architecture
      • Infrastructure Architecture
  2. Governance processes and organizations
  3. Transition Planning

During project implementations there are many ways to achieve a project objective; which way to follow is driven by the Enterprise Architecture, as it explains the overall context in which the project outcome will reside. These decisions should take the enterprise-wide goals into consideration in order to minimize cost, effort, redundancy, and the need to redo the same work.

Principles help with this; I think each project should inherit the enterprise-wide principles to guide its design and evolution, in order to ensure alignment with the global vision and to avoid creating many silos.

 

Cloud Computing – Introduction

Cloud computing is a new way of acquiring and using computing resources; more than 33% of CIOs have cited it as their most important visionary initiative.

Main characteristics:
On-demand self-service
Virtualization and resource pooling
Usage-based model
Rapid elasticity

Main Delivery Methods:
SaaS:
Business Process, CRM, HR, Industry Applications
Example: Salesforce

PaaS:
Middleware, DB, Development Tools
Example: Google Cloud

IaaS:
Servers, Networking, Storage
Example: Amazon Web Services

Oracle Jobs

To check current sessions and their server processes
==============================
SELECT s.sid, s.serial#, s.ownerid,
s.status, s.machine, s.terminal,
s.program, s.username, p.addr,
s.logon_time
FROM v_$process p, v_$session s
WHERE p.addr = s.paddr;

To check datapump jobs
==============================
Select * from dba_datapump_jobs;

To check Running Jobs
==============================
select * from dba_jobs_running;

select object_name from dba_objects where object_name like '%JOB%' and object_type in ('TABLE','VIEW');

To check Scheduled Jobs
==============================
DBA_SCHEDULER_JOBS
DBA_SCHEDULER_JOB_ARGS
DBA_SCHEDULER_JOB_CLASSES
DBA_SCHEDULER_JOB_LOG
DBA_SCHEDULER_JOB_RUN_DETAILS
DBA_SCHEDULER_RUNNING_JOBS
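
For example, to list the scheduled jobs together with their historical runs and statuses (a minimal sketch combining the first and fifth views above; it returns one row per recorded run):

select j.owner, j.job_name, j.enabled, j.state,
       d.status, d.actual_start_date
from   dba_scheduler_jobs j
       left join dba_scheduler_job_run_details d
              on d.owner = j.owner and d.job_name = j.job_name
order  by d.actual_start_date desc;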

Data Pump interactive commands
===============================
HELP     --> lists the available interactive-mode commands
ATTACH   --> attaches the client to a running job (e.g. expdp username/password attach=jobname)
KILL_JOB --> detaches all clients and kills the running job

Oracle Statistics

Statistics can be computed exactly or estimated from a sample of rows:

ANALYZE TABLE|INDEX object_name COMPUTE STATISTICS;
ANALYZE TABLE|INDEX object_name ESTIMATE STATISTICS SAMPLE n ROWS|PERCENT;

EXEC DBMS_UTILITY.analyze_schema('SCHEMA_NAME', 'COMPUTE');
EXEC DBMS_UTILITY.analyze_schema('SCHEMA_NAME', 'ESTIMATE', estimate_percent => 15);
Oracle recommends using the DBMS_STATS package for collecting statistics rather than calling ANALYZE directly.

 

EXEC DBMS_STATS.gather_database_stats;
EXEC DBMS_STATS.gather_database_stats(estimate_percent => 15);

EXEC DBMS_STATS.gather_schema_stats('SCHEMA_NAME');
EXEC DBMS_STATS.gather_schema_stats('SCHEMA_NAME', estimate_percent => 15);

 

To compute statistics for one partition of a table:
ANALYZE TABLE table_name PARTITION (partition_name) COMPUTE STATISTICS;
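
The DBMS_STATS equivalent for a single partition would look something like this (a sketch; the schema, table, and partition names are placeholders):

EXEC DBMS_STATS.gather_table_stats(ownname => 'SCHEMA_NAME', tabname => 'TABLE_NAME', partname => 'PARTITION_NAME', granularity => 'PARTITION');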

DBA_OPTSTAT_OPERATIONS: history of statistics operations performed at the schema and database level using the DBMS_STATS package.
DBA_USTATS: statistics collected at the table or index level (user-defined statistics).
DBA_TAB_STATS_HISTORY: history of table statistics modifications for all tables in the database (retained for 31 days by default).
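
For example, to see when the statistics of a particular table were last modified (a small sketch; 'TABLE_NAME' is a placeholder):

select table_name, stats_update_time
from   dba_tab_stats_history
where  table_name = 'TABLE_NAME'
order  by stats_update_time desc;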

Oracle Table Partition

Very large tables and indexes can be partitioned in order to decompose a big object into smaller pieces.

This can have a major impact on performance as well as on maintenance and management activities.

There are mainly three data distribution methods:

  • List Partition
  • Range Partition
  • Hash Partition

Each table can be partitioned using

  • Single Level Partition
  • Composite Partition

Each partition can be stored in a different tablespace.
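
For example, a range-partitioned table whose partitions are placed in two different tablespaces (a sketch; the table and tablespace names TS1 and TS2 are placeholders):

CREATE TABLE sales_by_year
(
sale_id   NUMBER,
sale_date DATE
)
PARTITION BY RANGE (sale_date)
(
PARTITION p_2015 VALUES LESS THAN (DATE '2016-01-01') TABLESPACE TS1,
PARTITION p_rest VALUES LESS THAN (MAXVALUE) TABLESPACE TS2
);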

 

Reference partitioning: supported since Oracle 11g; the child table's partitions are based on its foreign key to a partitioned parent table, as shown in the sketch below.
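
A minimal reference-partitioning sketch (the ORDERS/ORDER_ITEMS tables and constraint names are illustrative only): the child table inherits its partitioning from the parent through the foreign key, so ORDER_ITEMS rows are stored in the partition of their parent order.

CREATE TABLE ORDERS
(
order_id   NUMBER PRIMARY KEY,
order_date DATE
)
PARTITION BY RANGE (order_date)
(
PARTITION p_2015 VALUES LESS THAN (DATE '2016-01-01'),
PARTITION p_rest VALUES LESS THAN (MAXVALUE)
);

CREATE TABLE ORDER_ITEMS
(
item_id  NUMBER PRIMARY KEY,
order_id NUMBER NOT NULL,
CONSTRAINT fk_order_items_orders FOREIGN KEY (order_id) REFERENCES ORDERS (order_id)
)
PARTITION BY REFERENCE (fk_order_items_orders);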

Below are some examples of creating and administering partitions.

 

DROP TABLE TWOPARTITION;

CREATE TABLE TWOPARTITION
(
ID1    NUMBER,
Name1  VARCHAR2(100),
ID2    NUMBER,
Name2  VARCHAR2(100)
)
PARTITION BY RANGE (id1)
SUBPARTITION BY LIST (id2)
SUBPARTITION TEMPLATE
(SUBPARTITION sub_p_1 VALUES (1,2,3,4,5,6,7,8,9),
SUBPARTITION Sub_p_2 VALUES (DEFAULT)
)
(
PARTITION P1_1 VALUES LESS THAN (1000),
PARTITION P1_2 VALUES LESS THAN (MAXVALUE)
);

 

To check which tables are partitioned

Select * from user_part_tables;

To Get information about table partitions

select * from user_tab_partitions;

 

select * from user_tab_subpartitions;

Other related dictionary views:

  • User_Part_Key_Columns
  • User_SubPart_Key_Columns
  • User_Part_Col_Statistics
  • User_SubPart_Col_Statistics
  • User_Subpartition_Templates

 

To Split one partition

alter table twopartition split partition p1_2 at (6000 ) into (partition p1_3, partition p1_rest);
alter table twopartition split partition p1_rest at (7000 ) into (partition p1_4, partition p1_rest);
alter table twopartition split partition p1_rest at (8000 ) into (partition p1_6, partition p1_rest);

To Merge 2 partitions

alter table twopartition merge partitions P1_3, P1_4 into  Partition P1_3N;
alter table twopartition merge partitions P1_6, P1_REST into  Partition P1_REST;
alter table twopartition merge partitions P1_3N, P1_REST into  Partition P1_2;

 

select * from user_tab_partitions;

You might find some tables with names like BIN$…; these are dropped objects kept in the recycle bin.

select * from recyclebin;

To remove them, use the PURGE command:

purge table "BIN$SehH2WsDSUOowSrN47+PWw==$0";

or purge the complete recyclebin

PURGE RECYCLEBIN;

Truncate partition

ALTER TABLE twopartition
TRUNCATE PARTITION P1_2;

Select * from twopartition partition(P1_2);

PHP sample for Extracting Cognos report Query information From the xml report specification

Here is a sample PHP file that can be used to extract the query information of a Cognos report from the Cognos Content Store.

It asks for the report XML specification file (passed as the report_file request parameter). The script can be extended to connect directly to the content store database and list the reports as a master navigation list, with this script as the detail view.

<html>
<head>
<link type="text/css" rel="stylesheet" href="mgawad.css" />
</head>
<body>
<?php

// Print the data items of a <selection> element, one HTML table row per item.
function Selection_atr($selection, $offset)
{
    echo " <table><tr><td>Type</td><td>name</td><td>aggregate</td><td>rollupAggregate</td><td>sort</td><td>expression</td></tr> ";

    foreach ($selection->children() as $x)
    {
        echo " <tr> ";
        // Reset the attribute buffer for every data item so values do not leak between rows.
        $att = array('name' => '', 'aggregate' => '', 'rollupAggregate' => '', 'sort' => '', 'expression' => '');
        $att['Type'] = $x->getName();
        foreach ($x->attributes() as $a => $b)
        {
            switch ($a)
            {
            case 'name':
                $att['name'] = $b;
                break;
            case 'aggregate':
                $att['aggregate'] = $b;
                break;
            case 'rollupAggregate':
                $att['rollupAggregate'] = $b;
                break;
            case 'expression':
                $att['expression'] = $b;
                break;
            case 'sort':
                $att['sort'] = $b;
                break;
            default:
                echo "Error" . $b;
            }
        }
        // The expression may also appear as a child element rather than an attribute.
        foreach ($x->children() as $child)
        {
            if ($child->getName() == 'expression')
            {
                $att['expression'] = $child;
            }
        }
        echo $offset . "  <td>" . $att['Type'] . "</td>  <td>" . $att['name'] . "</td>  <td>" . $att['aggregate'] . "</td>  <td>" . $att['rollupAggregate'] . "</td>  <td>" . $att['sort'] . "</td>  <td>" . $att['expression'] . "</td> </tr>";
    }
    echo " </table> ";
}

// Print the detail filters of a query as an HTML table.
function detailFilters_atr($detailFilters, $offset)
{
    echo " <table><tr><td>use</td><td>filterExpression</td></tr> ";

    foreach ($detailFilters->children() as $x)
    {
        echo " <tr> ";
        $att = array('use' => '', 'filterExpression' => '');
        foreach ($x->attributes() as $a => $b)
        {
            switch ($a)
            {
            case 'use':
                $att['use'] = $b;
                break;
            default:
                echo "Error" . $b;
            }
        }
        foreach ($x->children() as $child)
        {
            if ($child->getName() == 'filterExpression')
            {
                $att['filterExpression'] = $child;
            }
        }
        echo $offset . "  <td>" . $att['use'] . "</td>  <td>" . $att['filterExpression'] . "</td> </tr>";
    }
    echo " </table> ";
}

// Print an element name and its attributes as a one-row HTML table.
function atr($x, $offset)
{
    //if ($x->getName() == 'selection' ) { Selection_atr($x,$offset); }
    //else {
    echo " <table><tr><td>" . $x->getName() . "</td>";
    foreach ($x->attributes() as $a => $b)
    {
        echo $offset . "  <td>" . $a . '=' . $b . "</td>  ";
    }
    echo " </tr></table> " . "<br />";
    //}
}

// Recursively walk the element tree; <selection> and <detailFilters> get special handling.
function el($x, $offset)
{
    switch ($x->getName())
    {
    /* case 'query':
        echo " <table><tr><td>Query</td></tr><tr>";
        foreach ($x->children() as $child)
        {
            atr($child, $offset);
            el($child, $offset);
            echo "<br />";
        }
        echo " </tr></table> " . "<br />";
        break;
    */
    case 'selection':
        Selection_atr($x, $offset);
        break;
    case 'detailFilters':
        echo $offset . $x->getName() . ": " . $x . "<br />";
        detailFilters_atr($x, $offset);
        break;
    default:
        foreach ($x->children() as $child)
        {
            //echo $offset . $child->getName() . ": " . $child . "<br />";
            atr($child, $offset);
            el($child, $offset);
            echo "<br />";
        }
    }
}

// Main: load the report specification XML passed as ?report_file=... and dump its queries.
$xml = simplexml_load_file($_GET["report_file"]);

echo $xml->getName() . "<br />";

foreach ($xml->children() as $child)
{
    if ($child->getName() == 'queries') {
        echo $child->getName() . ": " . $child . "<br />";
        atr($child, '');
        el($child, '');
    }
}
?>
</body>
</html>

Using Oracle Data Pump for backup and restore

Examples

expdp username/password full=n schemas=Schema_Name directory=Data_Pump_Dir dumpfile=dumpfilename logfile=logfilename job_name=jobname

During the import, we can import into another tablespace and schema using REMAP_TABLESPACE and REMAP_SCHEMA:

impdp username/password REMAP_TABLESPACE=Source_TS:Target_TS REMAP_SCHEMA=Source_Schema:Target_Schema full=n  schemas=Schemaname directory=Data_Pump_Dir dumpfile=dumpfilename logfile=logfilename job_name=jobname