Feed aggregator

Oracle Offline Persistence Toolkit - Reacting to Replay Conflict

Andrejus Baranovski - 6 hours 5 min ago
This is the next post in my series on the Oracle Offline Persistence Toolkit. Check my previous post on the same subject - Implementing Handle Patch Method in JET Offline Toolkit. Read more about the toolkit in its GitHub repo.

When the application goes back online, we call the synchronisation method. If at least one of the replayed requests fails, synchronisation is stopped and an error callback is invoked, where we can handle the failure. In the error callback we check whether the failure is related to a conflict - if so, we open a dialog where the user decides what to do (force the client changes or take the server changes). The latest change indicator value is read from the response in the error callback, so that it can be applied if the user decides to force the client changes in the next request:


The dialog is simple - it displays dynamic text for the conflicted value and provides the user with a choice of actions:


Let's see how it works.

User A edits the value Lex and saves it to the backend:


User B is offline, editing the same value and saving it in local storage:


We can verify this in the log - the changed value was stored in local storage:


When going online, pending requests logged while offline are re-executed. The request above will obviously fail, because the same value was changed by another user, and a conflict is reported:


The PATCH operation fails with HTTP conflict code 409:


The user is asked how to proceed: apply the client changes and override the backend, or instead take the changes from the backend and bring them to the client:


I will explain how to implement these actions in my next post. In the meantime you can study the complete application, available in the GitHub repo.

[BLOG] Commonly Asked Questions Oracle GoldenGate 12c

Online Apps DBA - 6 hours 35 min ago

Visit: https://k21academy.com/goldengate26 and learn about: ✔Types of replication in GoldenGate ✔The difference between the classic & integrated extract processes ✔What a data pump is in GoldenGate & much more… Leave a comment if you have any question related to Oracle GoldenGate. […]

The post [BLOG] Commonly Asked Questions Oracle GoldenGate 12c appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Connect Power BI to GCP BigQuery using Simba Drivers

Ittichai Chammavanijakul - Fri, 2018-09-21 21:56

Power BI can connect to GCP BigQuery through its provided connector. However, some users have reported the refresh failure seen below. Even though the error message suggests that the quota for API requests per user per minute may be exceeded, some reported that the error still occurs even when a small dataset is being fetched.

In my case, simply disabling parallel loading of tables (Options and settings > Options > Data Load) made the issue go away. However, some still said it did not help.

An alternative option is to use another supported ODBC or JDBC driver from Simba Technologies Inc., which has partnered with Google.

Setup

  • Download the latest 64-bit ODBC driver from here.
  • Install it on the local desktop where Power BI Desktop is installed. We will have to install the same driver on the Power BI Gateway Server if the published report needs to be refreshed on Power BI Service.

Configuration

  • From Control Panel > Administrative Tools > ODBC Data Source Administrator > System DSN, click Configure on the Google BigQuery DSN.
  • Follow the instructions from the screens below.

When connecting in Power BI, choose Get Data > ODBC.

Categories: DBA Blogs

RMAN-03002: ORA-19693: backup piece already included

Michael Dinh - Fri, 2018-09-21 18:36

I have been cursed trying to create a 25TB standby database.

Active duplication using the standby as the source failed due to a bug.

Backup-based duplication using the standby as the source failed due to a bug again.

Now performing traditional restore.

Both of the attempts below failed with RMAN-20261: ambiguous backup piece handle.

RMAN> list backuppiece '/bkup/ovtdkik0_1_1.bkp';
RMAN> change backuppiece '/bkup/ovtdkik0_1_1.bkp' uncatalog;

What’s in the backup?

RMAN> spool log to /tmp/list.log
RMAN> list backup;
RMAN> exit

There are two identical backup pieces, and I don't know how this could have happened.

$ grep ovtdkik0_1_1 /tmp/list.log
    201792  1   AVAILABLE   /bkup/ovtdkik0_1_1.bkp
    202262  1   AVAILABLE   /bkup/ovtdkik0_1_1.bkp
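
The same duplication can be cross-checked in the control file views. A minimal sketch, assuming no recovery catalog (the handle is the one from this case):

-- Two rows returned for the same handle confirm the duplicate registration.
SELECT recid, stamp, status, handle
FROM   v$backup_piece
WHERE  handle = '/bkup/ovtdkik0_1_1.bkp';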

RMAN> delete backuppiece 202262;

Restarted the restore and it is running again.

PeopleTools 8.57 is Available on the Oracle Cloud

PeopleSoft Technology Blog - Fri, 2018-09-21 15:17

We are pleased to announce that PeopleTools 8.57 is generally available for install and upgrade on the Oracle Cloud.  As we announced earlier, PeopleTools 8.57 will initially be available only on the Oracle Cloud.  We plan to make PeopleTools 8.57 available for on-premises downloads with the 8.57.04 CPU patch in January 2019.  

There are many exciting new features in PeopleTools 8.57, including:

  • The ability for end users to set up conditions in analytics that, if met, will notify the user
  • Improvements to the way Related Content and Analytics are displayed
  • Add custom fields to Fluid pages with minimum life-cycle impact
  • More capabilities for end user personalization
  • Improved search that supports multi-facet selections
  • Easier than ever to brand the application with your corporate colors and graphics
  • Fluid page preview in AppDesigner and improved UI properties interface
  • End-to-end command-line support for life-cycle management processes
  • And much more!

You’ll want to get all the details and learn about the new features in 8.57. A great place to start is the PeopleTools 8.57 Highlights Video posted on the PeopleSoft YouTube channel. The highlights video gives you an overview of the new features and shows how to use them.

There is plenty more information about the release available today.  Here are some links to some of the other places you can go to learn more about 8.57:

In addition to releasing PeopleTools 8.57, version 7 of PeopleSoft Cloud Manager is also being released today.  CM 7 is similar in functionality to CM 6 with additional support for PeopleTools 8.57.  If you currently use a version of Cloud Manager you must upgrade to version 7 in order to install PT 8.57. 

There are a lot of questions about how to get started using PeopleTools 8.57 and Cloud Manager 7.  Documentation and installation instructions are available on the Cloud Manager Home Page.

More information will be published over the next couple of weeks to help you get started with 8.57 on the cloud. Additional information will include blogs to help with the details of the installation, a video that shows the complete process from creating a free trial account to running PT 8.57, and a detailed Spotlight Video that describes configuring OCI and Cloud Manager 7.

PeopleTools 8.57 is a significant milestone for Oracle, making it easier than ever for customers to use, maintain and run PeopleSoft Applications.

OAC 18.3.3: New Features

Rittman Mead Consulting - Fri, 2018-09-21 07:58

I believe there is a hidden strategy behind Oracle's product release schedule: every time I'm either on holiday or on a business trip full of appointments, a new version of Oracle Analytics Cloud is published with a huge set of new features!

OAC 18.3.3 went live last week and contains a big set of enhancements, some of which were already described at Kscope18 during the Sunday Symposium. New features appear in almost all the areas covered by OAC: from Data Preparation to Data Flows, new visualization types, new security and configuration options, and BIP and Essbase enhancements. Let's have a look at what's there!

Data Preparation

A recurring theme in Europe since last year is GDPR, the General Data Protection Regulation, which aims to protect the data and privacy of all European citizens. This is very important in our landscape since we "play" with data on a daily basis and we should be aware of what data we can use and how.
Luckily for us, OAC now helps address GDPR with the Data Preparation Recommendations step: every time a dataset is added, each column is profiled and a list of recommended transformations is suggested to the user. Please note that Data Preparation Recommendations only suggests changes to the dataset, so it can't be considered a global solution to GDPR compliance.
The suggestions may include:

  • Complete or partial obfuscation of the data: useful when dealing with security- or user-sensitive data
  • Data enrichment based on the column data, which can include:
    • Demographic information based on names
    • Geographic information based on locations and zip codes

Each suggestion applied to the dataset is stored in a data preparation script that can easily be reapplied when the data is updated.

Data Flows

Data Flows is the "mini-ETL" component within OAC which allows transformations, joins, aggregations, filtering, binning and machine learning model training, storing the resulting artifacts either locally, in a database or in an Essbase cube.
Data Flows, however, had some limitations; the first was that they had to be run manually by the user. With OAC 18.3.3 there is now an option to schedule Data Flows, much as we were used to doing when scheduling Agents back in OBIEE.

Another limitation was that each Data Flow could produce only a single dataset. This has been solved by the introduction of the Branch node, which allows a single Data Flow to produce multiple datasets - very useful when the same set of source data and transformations needs to produce various datasets.

Two other new features have been introduced to make Data Flows more reusable: Parametrized Sources and Outputs, and Incremental Processing.
Parametrized Sources and Outputs allow the Data Flow source or target to be selected at runtime - for example, to create a specific, different dataset for today's load.

Incremental Processing, as the name says, is a way to run Data Flows only on the data added since the last run (incremental loads in ETL terms). In order to have a Data Flow work with incremental loads we need to:

  • Define, in the source dataset, the key column that identifies data added since the last run (e.g. CUSTOMER_KEY or ORDER_DATE)
  • When including the dataset in a Data Flow, enable execution of the Data Flow with only the new data
  • In the target dataset, define whether Incremental Processing replaces existing data or appends data.

Please note that Incremental Processing is available only when using database sources.

Another important improvement is Function Shipping when Data Flows are used with Big Data Cloud: if the source datasets come from BDC and the results are stored in BDC, all the transformations (joins, added calculation columns, filtering) are shipped to BDC as well, meaning no additional load is placed on OAC for the Data Flow.

Lastly, there is a new Properties Inspector feature in Data Flows, allowing you to check properties such as name and description, as well as access and modify the schedule of the related flow.

Data Replication

It is now possible to use OAC to replicate data from a source system like Oracle's Fusion Apps, Talend or Eloqua directly into Big Data Cloud, Database Cloud or Data Warehouse Cloud. This function is extremely useful since it decouples the queries generated by the analytical tools from the source systems.
As expected, the user can select which objects to replicate, the filters to apply, the destination tables and columns, and the load type (Full or Incremental).

Project Creation

New visualization capabilities have been added which include:

  • Grid HeatMap
  • Correlation Matrix
  • Discrete Shapes
  • 100% Stacked Bars and Area Charts

In the Map views, Multiple Map Layers can now be added as well as Density and Metric based HeatMaps, all on top of new background maps including Baidu and Google.

Tooltips are now supported in all visualizations, allowing the end user to add measure columns which will be shown when hovering over a section of any graph.

The Explain feature is now available on metrics, not only on attributes, and has been enhanced: a new anomaly detection algorithm identifies anomalies in combinations of columns, working in the background in asynchronous mode so that anomalies are pushed to the user as soon as they are found.

A new feature that many developers will appreciate is AutoSave: we are all used to autosave in Google Docs, and the same now applies to OAC - a project is saved automatically at every change. Of course this feature can be turned off if necessary.
Another very interesting addition is Copy Data to Clipboard: with a right click on any graph, an option to save the underlying data to the clipboard is available. The data can then be pasted natively into Excel.

Did you create a new dataset and want to repoint your existing project to it? With Dataset Replacement it's just a few clicks away: you only need to select the new dataset and re-map the columns used in your current project!

Data Management

The datasets/dataflows/project methodology is typical of what Gartner defined as Mode 2 analytics: analysis done by a business user without any involvement from IT. The step sometimes missing, or hard to perform, in self-service tools is publishing: once a certain dataset is consistent and ready to be shared, it's rather difficult to open it up to a larger audience within the same toolset.
New OAC administrative options address this problem: dataset Certification by an administrator allows a dataset to be queried via Ask and DayByDay by other users. There is also a dataset Permissions tab allowing the definition of Full Control, Edit or Read Only access at user or role level. This is the way to bring self-service datasets back under corporate visibility.

A Search tab allows fine control over the indexing of a dataset used by Ask and DayByDay. There are now options to select when the indexing is executed, as well as which columns to index and how (by column name and value, or by column name only).

BIP and Essbase

BI Publisher was added to OAC in the previous version; it now includes new features like tighter integration with datasets, which can be used as data sources, as well as features like email delivery read-receipt notification, compressed output and password protection that were already available in the on-premises version.
There is also a new set of features for Essbase, including a new UI, REST APIs and, very importantly security-wise, all external communications (like Smartview) now running over HTTPS.
For a detailed list of new features check this link.

Conclusion

OAC 18.3.3 includes an incredible number of new features which enable the whole analytics story: from self-service data discovery to corporate dashboarding and pixel-perfect formatting, all within the same tool and shared security settings. Options like parametrized and incremental Data Flows allow content reusability and enhance overall platform performance by reducing the load on source systems.
If you are looking into OAC and want to know more, don't hesitate to contact us.

Categories: BI & Warehousing

Clob data type errors out when it crosses the varchar2 limit

Tom Kyte - Fri, 2018-09-21 04:26
A Clob datatype in a PL/SQL program raises an exception when it crosses the varchar2 limit, giving "Error: ORA-06502: PL/SQL: numeric or value error". Why is the Clob datatype behaving like the varchar2 datatype? I think a clob can hold up to 4 GB of data. Pl...
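
For what it's worth, the usual culprit is an intermediate VARCHAR2 expression: PL/SQL VARCHAR2 values are capped at 32,767 bytes, so building a large string with || before it ever reaches the CLOB raises ORA-06502 even though the CLOB itself can hold gigabytes. A minimal sketch of the safe pattern, using DBMS_LOB to append past the limit (standalone example, no table involved):

DECLARE
  l_clob  CLOB;
  l_chunk VARCHAR2(32767) := RPAD('x', 32767, 'x');
BEGIN
  DBMS_LOB.CREATETEMPORARY(l_clob, TRUE);
  -- Append chunk by chunk so no intermediate VARCHAR2 exceeds 32,767 bytes;
  -- concatenating l_chunk || l_chunk into a VARCHAR2 first would raise ORA-06502.
  FOR i IN 1 .. 10 LOOP
    DBMS_LOB.WRITEAPPEND(l_clob, LENGTH(l_chunk), l_chunk);
  END LOOP;
  DBMS_OUTPUT.PUT_LINE('CLOB length: ' || DBMS_LOB.GETLENGTH(l_clob));
  DBMS_LOB.FREETEMPORARY(l_clob);
END;
/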
Categories: DBA Blogs

Migrating Oracle 10g on Solaris Sparc to Linux RHEL 5 VM

Tom Kyte - Fri, 2018-09-21 04:26
Hi, if I were to rate my Oracle expertise I would give it 3/10. I just started learning Oracle, Solaris and Linux two months ago and was given this migration task. Yes, our Oracle version is quite old and might not be supported anymore. Both platforms ...
Categories: DBA Blogs

"secure" in securefile

Tom Kyte - Fri, 2018-09-21 04:26
Good Afternoon, My question is a simple one. I've wondered why Oracle decided to give the new data type the name "securefile". Is it because we can encrypt it, whereas before, with basicfile, we couldn't encrypt the LOB? Also, why not call it "se...
Categories: DBA Blogs

Pre-allocating table columns for fast customer demands

Tom Kyte - Fri, 2018-09-21 04:26
Hello team, I have come across a strange business requirement that has caused an application team I support to submit a design that is pretty bad. The problem is I have difficulty quantifying this, so I'm hoping you can help me with all the reasons why ...
Categories: DBA Blogs

move system datafiles

Tom Kyte - Fri, 2018-09-21 04:26
Hi Tom, When we install Oracle and create the database by default (not manually), the system datafiles are located at a specific location. Is it possible to move these (system tablespace) datafiles from the original location to...
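
For reference, a sketch of the two usual approaches (paths are hypothetical; check the documentation for your exact version):

-- Classic approach: shut down, copy the file at the OS level, then with
-- the database in MOUNT state repoint the control file and open:
ALTER DATABASE RENAME FILE '/u01/oradata/DB/system01.dbf'
                        TO '/u02/oradata/DB/system01.dbf';
ALTER DATABASE OPEN;

-- From 12c onwards a datafile can also be moved online in one statement:
ALTER DATABASE MOVE DATAFILE '/u01/oradata/DB/system01.dbf'
                          TO '/u02/oradata/DB/system01.dbf';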
Categories: DBA Blogs

how does SKIPEMPTYTRANS work?

Tom Kyte - Fri, 2018-09-21 04:26
I am wondering how SKIPEMPTYTRANS works. When does OGG judge whether a transaction is empty or not? Does it make the judgement in the middle of a transaction? How does OGG know it's an empty transaction, provided that it did not update mapped tables before the jud...
Categories: DBA Blogs

Upgrade Oracle Internet Directory from 11G (11.1.1.9) to 12C (12.2.1.3)

Yann Neuhaus - Fri, 2018-09-21 00:53

There is no in-place upgrade from OID 11.1.1.9 to OID 12c (12.2.1.3). The steps to follow are:

  1. Install the required JDK version
  2. Install the Fusion Middleware Infrastructure 12c (12.2.1.3)
  3. Install the OID 12C (12.2.1.3) in the Fusion Middleware Infrastructure Home
  4. Upgrade the existing OID database schemas
  5. Reconfigure the OID WebLogic Domain
  6. Upgrade the OID WebLogic Domain

1. Install JDK 1.8.131+

I used JDK 1.8.0_161

cd /u00/app/oracle/product/Java
tar xvf ~/software/jdk1.8.0_161

Set JAVA_HOME and add $JAVA_HOME/bin to the PATH.

2. Install Fusion Middleware Infrastructure 12.2.1.3  software

I will not go into the details as this is a simple Fusion Middleware Infrastructure 12.2.1.3 software installation.
This software contains WebLogic 12.2.1.3, so there is no need to install WebLogic separately.

I used MW_HOME set to /u00/app/oracle/product/oid12c

java -jar ~/software/fmw_12.2.1.3_infrastructure.jar

3. Install OID 12C software

This part is just a software installation; you only need to follow the steps in the installation wizard.

cd ~/software/
./fmw_12.2.1.3.0_oid_linux64.bin

4. Check the existing schemas:

In SQLPLUS connected as SYS run the following query

SET LINE 120
COLUMN MRC_NAME FORMAT A14
COLUMN COMP_ID FORMAT A20
COLUMN VERSION FORMAT A12
COLUMN STATUS FORMAT A9
COLUMN UPGRADED FORMAT A8
SELECT MRC_NAME, COMP_ID, OWNER, VERSION, STATUS, UPGRADED FROM SCHEMA_VERSION_REGISTRY ORDER BY MRC_NAME, COMP_ID ;

The results:

MRC_NAME       COMP_ID  OWNER      VERSION     STATUS  UPGRADED
-------------- -------- ---------- ----------- ------- --------
DEFAULT_PREFIX OID      ODS        11.1.1.9.0  VALID   N
IAM            IAU      IAM_IAU    11.1.1.9.0  VALID   N
IAM            MDS      IAM_MDS    11.1.1.9.0  VALID   N
IAM            OAM      IAM_OAM    11.1.2.3.0  VALID   N
IAM            OMSM     IAM_OMSM   11.1.2.3.0  VALID   N
IAM            OPSS     IAM_OPSS   11.1.1.9.0  VALID   N
OUD            IAU      OUD_IAU    11.1.1.9.0  VALID   N
OUD            MDS      OUD_MDS    11.1.1.9.0  VALID   N
OUD            OPSS     OUD_OPSS   11.1.1.9.0  VALID   N

9 rows selected.

I have an OID 11.1.1.9 and an IAM 11.1.2.3 using the same database as repository.

5. ODS Schema upgrade:

Take care to upgrade only the ODS schema and not the IAM schemas, or Oracle Access Manager will no longer work.
With OID 11.1.1.9 only the ODS schema was installed; the ODS upgrade requires new schemas to be created.

cd /u00/app/oracle/product/oid12c/oracle_common/upgrade/bin/
./ua

Oracle Fusion Middleware Upgrade Assistant 12.2.1.3.0
Log file is located at: /u00/app/oracle/product/oid12c/oracle_common/upgrade/logs/ua2018-01-26-11-13-37AM.log
Reading installer inventory, this will take a few moments...
...completed reading installer inventory.

Below are the most important screenshots of the ODS schema upgrade.

ODS schema upgrade screenshots 1 through 8; screenshot 3 checks the schema validity.

In SQLPLUS connected as SYS run the following query

SET LINE 120
COLUMN MRC_NAME FORMAT A14
COLUMN COMP_ID FORMAT A20
COLUMN VERSION FORMAT A12
COLUMN STATUS FORMAT A9
COLUMN UPGRADED FORMAT A8
SELECT MRC_NAME, COMP_ID, OWNER, VERSION, STATUS, UPGRADED FROM SCHEMA_VERSION_REGISTRY ORDER BY MRC_NAME, COMP_ID;

MRC_NAME       COMP_ID     OWNER              VERSION     STATUS  UPGRADED
-------------- ----------- ------------------ ----------- ------- --------
DEFAULT_PREFIX OID         ODS                12.2.1.3.0  VALID   Y
IAM            IAU         IAM_IAU            11.1.1.9.0  VALID   N
IAM            MDS         IAM_MDS            11.1.1.9.0  VALID   N
IAM            OAM         IAM_OAM            11.1.2.3.0  VALID   N
IAM            OMSM        IAM_OMSM           11.1.2.3.0  VALID   N
IAM            OPSS        IAM_OPSS           11.1.1.9.0  VALID   N
OID12C         IAU         OID12C_IAU         12.2.1.2.0  VALID   N
OID12C         IAU_APPEND  OID12C_IAU_APPEND  12.2.1.2.0  VALID   N
OID12C         IAU_VIEWER  OID12C_IAU_VIEWER  12.2.1.2.0  VALID   N
OID12C         OPSS        OID12C_OPSS        12.2.1.0.0  VALID   N
OID12C         STB         OID12C_STB         12.2.1.3.0  VALID   N
OID12C         WLS         OID12C_WLS         12.2.1.0.0  VALID   N
OUD            IAU         OUD_IAU            11.1.1.9.0  VALID   N
OUD            MDS         OUD_MDS            11.1.1.9.0  VALID   N
OUD            OPSS        OUD_OPSS           11.1.1.9.0  VALID   N

15 rows selected.

I named the new OID repository schemas OID12C during the ODS upgrade.

6. Reconfigure the Domain

cd /u00/app/oracle/product/oid12c/oracle_common/common/bin/
./reconfig.sh -log=/tmp/reconfig.log -log_priority=ALL

See the "Reconfigure Domain" screenshots (1 through 25).

7. Upgrading Domain Component Configurations

cd ../../upgrade/bin/
./ua

Oracle Fusion Middleware Upgrade Assistant 12.2.1.3.0
Log file is located at: /u00/app/oracle/product/oid12c/oracle_common/upgrade/logs/ua2018-01-26-12-18-12PM.log
Reading installer inventory, this will take a few moments…

The following are the screenshots of the WebLogic Domain configuration upgrade (upgrade domain component configuration, screens 1 through 7).

8. Start the domain

For this first start I will use the normal start scripts installed when upgrading the domain, in separate PuTTY sessions, to see the traces.

Putty Session 1:

cd /u01/app/OID/user_projects/domains/IDMDomain/bin
# Start the Admin Server in the first putty
./startWebLogic.sh

Putty Session 2:

cd /u01/app/OID/user_projects/domains/IDMDomain/bin
# In another shell session, start the Node Manager:
./startNodeManager.sh

Putty Session 3:

cd /u01/app/OID/user_projects/domains/IDMDomain/bin
./startComponent.sh oid1

Starting system Component oid1 ...

Initializing WebLogic Scripting Tool (WLST) ...

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands

Reading domain from /u01/app/OID/user_projects/domains/IDMDomain

Please enter Node Manager password:
Connecting to Node Manager ...
<Jan 26, 2018 1:02:08 PM CET> <Info> <Security> <BEA-090905> <Disabling the CryptoJ JCE Provider self-integrity check for better startup performance. To enable this check, specify -Dweblogic.security.allowCryptoJDefaultJCEVerification=true.>
<Jan 26, 2018 1:02:08 PM CET> <Info> <Security> <BEA-090906> <Changing the default Random Number Generator in RSA CryptoJ from ECDRBG128 to HMACDRBG. To disable this change, specify -Dweblogic.security.allowCryptoJDefaultPRNG=true.>
<Jan 26, 2018 1:02:08 PM CET> <Info> <Security> <BEA-090909> <Using the configured custom SSL Hostname Verifier implementation: weblogic.security.utils.SSLWLSHostnameVerifier$NullHostnameVerifier.>
Successfully Connected to Node Manager.
Starting server oid1 ...
Successfully started server oid1 ...
Successfully disconnected from Node Manager.

Exiting WebLogic Scripting Tool.

Done

The ODSM application is now deployed in the WebLogic Administration Server, and the WLS_ODS1 WebLogic Server from the previous OID 11g administration domain is no longer used.

http://host01.example.com:7002/odsm

7002 is the Administration Server port for this domain.

 

The article Upgrade Oracle Internet Directory from 11G (11.1.1.9) to 12C (12.2.1.3) appeared first on Blog dbi services.

Don’t Drop Your Career Using Drop Database

Michael Dinh - Thu, 2018-09-20 22:12

I first learned about drop database in 2007.

The environment contains standby database oltpdr.
Duplicating standby database olapdr on the same host using oltpdr as the source failed during the restore phase.
I then cleaned up the data files from the failed olapdr duplication.

Check database olapdr.
olap1> show parameter db%name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_name                              string      oltp
db_unique_name                       string      olapdr

olap1> select count(*) from gv$session;

  COUNT(*)
----------
        90

Elapsed: 00:00:00.00
olap1> select open_mode from v$database;

OPEN_MODE
--------------------
MOUNTED

Elapsed: 00:00:00.03
olap1> startup force mount restrict exclusive;
ORACLE instance started.

Total System Global Area 2.5770E+10 bytes
Fixed Size                  6870952 bytes
Variable Size            5625976920 bytes
Database Buffers         1.9998E+10 bytes
Redo Buffers              138514432 bytes
Database mounted.

olap1> show parameter db%name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_name                              string      oltp
db_unique_name                       string      olapdr

olap1> select count(*) from gv$session;

  COUNT(*)
----------
        92

Elapsed: 00:00:00.01
olap1> select open_mode from v$database;

OPEN_MODE
--------------------
MOUNTED

Elapsed: 00:00:00.04
At this point, I was ready to run drop database, but somehow an angel was watching over me and I decided to check v$datafile first.
olap1> select name from v$datafile where rownum < 10;

NAME
-----------------------------------------------------------
+DATA/OLTPDR/DATAFILE/system.4069.986394171
+DATA/OLTPDR/DATAFILE/dev_odi_temp.4067.986394187
+DATA/OLTPDR/DATAFILE/sysaux.4458.985845085
+DATA/OLTPDR/DATAFILE/big_dmstaging_data_new_2.4687.986498821
+DATA/OLTPDR/DATAFILE/account_toll_index.3799.985714921
+DATA/OLTPDR/DATAFILE/users.2524.985777377
+DATA/OLTPDR/DATAFILE/dev_ias_temp.4141.985846937
+DATA/OLTPDR/DATAFILE/dev_stb.4143.985846937
+DATA/OLTPDR/DATAFILE/dev_odi_user.4144.985846937

9 rows selected.

Elapsed: 00:00:00.01

olap1> exit
Strange - the data files are the same for source and target.
oltp1> select open_mode from v$database;

OPEN_MODE
--------------------
READ ONLY WITH APPLY

Elapsed: 00:00:00.07
oltp1> select name from v$datafile where rownum < 10;

NAME
-----------------------------------------------------------
+DATA/OLTPDR/DATAFILE/system.4069.986394171
+DATA/OLTPDR/DATAFILE/dev_odi_temp.4067.986394187
+DATA/OLTPDR/DATAFILE/sysaux.4458.985845085
+DATA/OLTPDR/DATAFILE/big_dmstaging_data_new_2.4687.986498821
+DATA/OLTPDR/DATAFILE/account_toll_index.3799.985714921
+DATA/OLTPDR/DATAFILE/users.2524.985777377
+DATA/OLTPDR/DATAFILE/dev_ias_temp.4141.985846937
+DATA/OLTPDR/DATAFILE/dev_stb.4143.985846937
+DATA/OLTPDR/DATAFILE/dev_odi_user.4144.985846937

9 rows selected.

Elapsed: 00:00:00.01
oltp1> exit
Check data files from ASM.
ASMCMD> cd DATA
ASMCMD> ls
OLAPDR/
OLTP/
OLTPDR/
SCHDDBDR/
_MGMTDB/

ASMCMD> cd OLAPDR
ASMCMD> ls
CONTROLFILE/
DATAFILE/
ASMCMD> cd DATAFILE
ASMCMD> pwd
+DATA/OLAPDR/DATAFILE
ASMCMD> exit
Shutdown olapdr.
olap1> show parameter db%name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_name                              string      oltp
db_unique_name                       string      olapdr

olap1> select open_mode from v$database;

OPEN_MODE
--------------------
MOUNTED

Elapsed: 00:00:00.03
olap1> shut abort;
ORACLE instance shut down.
olap1> exit
Manually remove data files from ASM.
$ asmcmd lsof -G +DATA|grep -ic OLAPDR
0
$ asmcmd ls +DATA/OLAPDR/DATAFILE|wc -l
1665
$ asmcmd lsof -G +DATA/OLAPDR/DATAFILE|wc -l
0
$ asmcmd
ASMCMD> cd datac1
ASMCMD> cd olapdr
ASMCMD> ls
CONTROLFILE/
DATAFILE/
ASMCMD> cd datafile
ASMCMD> pwd
+DATA/olapdr/datafile
ASMCMD> rm *
You may delete multiple files and/or directories.
Are you sure? (y/n) y

What would have happened if drop database was executed?
Does anyone know for sure?
Would you have executed drop database?
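
For what it's worth, a minimal pre-flight sketch before any drop database - the whole point of the story is that the mounted control file must reference the database's own files:

-- Run against the instance you are about to drop; if db_unique_name and
-- the datafile paths do not match what you expect, stop here.
SELECT name, db_unique_name, open_mode FROM v$database;
SELECT name FROM v$controlfile;
SELECT name FROM v$datafile WHERE rownum <= 10;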

Differences Between Validate Preview [Summary]

Michael Dinh - Thu, 2018-09-20 19:44

Summary is equivalent to – list backup of database summary versus list backup of database.

RMAN> restore database validate preview summary from tag=stby_dup;

Starting restore at 20-SEP-2018 21:19:48
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=33 instance=hawk1 device type=DISK


List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
119     B  0  A DISK        18-SEP-2018 13:56:33 1       1       NO         STBY_DUP
using channel ORA_DISK_1

RMAN> restore database validate preview from tag=stby_dup;

Starting restore at 20-SEP-2018 21:18:44
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=33 instance=hawk1 device type=DISK


List of Backup Sets
===================


BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ --------------------
119     Incr 0  1.34G      DISK        00:00:15     18-SEP-2018 13:56:33
        BP Key: 121   Status: AVAILABLE  Compressed: NO  Tag: STBY_DUP
        Piece Name: /tmp/HAWK_djtde1c3_1_1.bkp
  List of Datafiles in backup set 119
  File LV Type Ckp SCN    Ckp Time             Name
  ---- -- ---- ---------- -------------------- ----
  1    0  Incr 6038608    18-SEP-2018 13:56:19 +DATA/hawkb/datafile/system.306.984318067
  2    0  Incr 6038608    18-SEP-2018 13:56:19 +DATA/hawkb/datafile/sysaux.307.984318067
  3    0  Incr 6038608    18-SEP-2018 13:56:19 +DATA/hawkb/datafile/undotbs1.309.984318093
  4    0  Incr 6038608    18-SEP-2018 13:56:19 +DATA/hawkb/datafile/users.310.984318093
  5    0  Incr 6038608    18-SEP-2018 13:56:19 +DATA/hawkb/datafile/undotbs2.311.984318095
  6    0  Incr 6038608    18-SEP-2018 13:56:19 +DATA/hawkb/datafile/undotbs3.312.984318095
using channel ORA_DISK_1

The following output is the same for both commands:
RMAN> restore database validate preview summary from tag=stby_dup;
RMAN> restore database validate preview from tag=stby_dup;

List of Archived Log Copies for database with db_unique_name HAWKB
=====================================================================

Key     Thrd Seq     S Low Time
------- ---- ------- - --------------------
849     1    506     A 18-SEP-2018 13:55:08
        Name: +FRA/hawkb/archivelog/2018_09_18/thread_1_seq_506.551.987170199

852     1    507     A 18-SEP-2018 13:56:39
        Name: +FRA/hawkb/archivelog/2018_09_18/thread_1_seq_507.552.987199227

856     1    508     A 18-SEP-2018 22:00:26
        Name: +FRA/hawkb/archivelog/2018_09_19/thread_1_seq_508.554.987220639

860     1    509     A 19-SEP-2018 03:57:18
        Name: +FRA/hawkb/archivelog/2018_09_19/thread_1_seq_509.556.987258729

862     1    510     A 19-SEP-2018 14:32:07
        Name: +FRA/hawkb/archivelog/2018_09_19/thread_1_seq_510.557.987285627

864     1    511     A 19-SEP-2018 22:00:27
        Name: +FRA/hawkb/archivelog/2018_09_19/thread_1_seq_511.558.987287235

868     1    512     A 19-SEP-2018 22:27:15
        Name: +FRA/hawkb/archivelog/2018_09_20/thread_1_seq_512.560.987325879

872     1    513     A 20-SEP-2018 09:11:18
        Name: +FRA/hawkb/archivelog/2018_09_20/thread_1_seq_513.562.987364831

847     2    173     A 18-SEP-2018 13:55:08
        Name: +FRA/hawkb/archivelog/2018_09_18/thread_2_seq_173.550.987170199

854     2    174     A 18-SEP-2018 13:56:38
        Name: +FRA/hawkb/archivelog/2018_09_19/thread_2_seq_174.553.987210305

858     2    175     A 19-SEP-2018 01:05:05
        Name: +FRA/hawkb/archivelog/2018_09_19/thread_2_seq_175.555.987253211

866     2    176     A 19-SEP-2018 13:00:10
        Name: +FRA/hawkb/archivelog/2018_09_19/thread_2_seq_176.559.987287239

870     2    177     A 19-SEP-2018 22:27:18
        Name: +FRA/hawkb/archivelog/2018_09_20/thread_2_seq_177.561.987328815

Media recovery start SCN is 6038608
Recovery must be done beyond SCN 6038608 to clear datafile fuzziness

channel ORA_DISK_1: starting validation of datafile backup set
channel ORA_DISK_1: reading from backup piece /tmp/HAWK_djtde1c3_1_1.bkp
channel ORA_DISK_1: piece handle=/tmp/HAWK_djtde1c3_1_1.bkp tag=STBY_DUP
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: validation complete, elapsed time: 00:00:08
using channel ORA_DISK_1

channel ORA_DISK_1: scanning archived log +FRA/hawkb/archivelog/2018_09_18/thread_1_seq_506.551.987170199
channel ORA_DISK_1: scanning archived log +FRA/hawkb/archivelog/2018_09_18/thread_1_seq_507.552.987199227
channel ORA_DISK_1: scanning archived log +FRA/hawkb/archivelog/2018_09_19/thread_1_seq_508.554.987220639
channel ORA_DISK_1: scanning archived log +FRA/hawkb/archivelog/2018_09_19/thread_1_seq_509.556.987258729
channel ORA_DISK_1: scanning archived log +FRA/hawkb/archivelog/2018_09_19/thread_1_seq_510.557.987285627
channel ORA_DISK_1: scanning archived log +FRA/hawkb/archivelog/2018_09_19/thread_1_seq_511.558.987287235
channel ORA_DISK_1: scanning archived log +FRA/hawkb/archivelog/2018_09_20/thread_1_seq_512.560.987325879
channel ORA_DISK_1: scanning archived log +FRA/hawkb/archivelog/2018_09_20/thread_1_seq_513.562.987364831
channel ORA_DISK_1: scanning archived log +FRA/hawkb/archivelog/2018_09_18/thread_2_seq_173.550.987170199
channel ORA_DISK_1: scanning archived log +FRA/hawkb/archivelog/2018_09_19/thread_2_seq_174.553.987210305
channel ORA_DISK_1: scanning archived log +FRA/hawkb/archivelog/2018_09_19/thread_2_seq_175.555.987253211
channel ORA_DISK_1: scanning archived log +FRA/hawkb/archivelog/2018_09_19/thread_2_seq_176.559.987287239
channel ORA_DISK_1: scanning archived log +FRA/hawkb/archivelog/2018_09_20/thread_2_seq_177.561.987328815
Finished restore at 20-SEP-2018 21:20:11

RMAN>
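
The "datafile fuzziness" line above can be cross-checked from the datafile headers once the restore completes. A minimal sketch using standard views (no catalog needed):

-- FUZZY must be NO for every file before the database can open; the
-- minimum checkpoint SCN shows how far recovery still has to go.
SELECT fuzzy, COUNT(*) AS files, MIN(checkpoint_change#) AS min_ckpt_scn
FROM   v$datafile_header
GROUP  BY fuzzy;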

New File Adapter - Native File Storage

Anthony Shorten - Thu, 2018-09-20 17:59

In Oracle Utilities Application Framework V4.3.0.6.0, a new File Adapter has been introduced to parameterize file locations across environments. In previous releases, environment variables or hard-coded paths were used to specify the locations of files.

With the introduction of the Oracle Utilities Cloud SaaS Services, the location of files is standardized, and to reduce maintenance costs these paths are now parameterized using an Extendable Lookup (F1-FileStorage) that defines the path alias and the physical location. The on-premise version of the Oracle Utilities Application Framework V4.3.0.6.0 supports local storage (including network storage) using this facility. The Oracle Utilities Cloud SaaS version supports both local (predefined) storage and the Oracle Object Storage Cloud.

For example:

Example Lookup

To use the alias in any FILE-PATH parameter (for example), the URL is specified in the FILE-PATH:

file-storage://MYFILES/mydirectory  (if you want to specify a subdirectory under the alias)

or

file-storage://MYFILES

Now, if you migrate to another environment (the lookup is migrated using the Configuration Migration Assistant), this record can simply be altered. If you are moving to the Cloud, the adapter can be changed to Oracle Object Storage Cloud. This removes the need to change each individual place that uses the alias.

It is recommended to take advantage of this capability:

  • Create an alias for each location you read or write files from in your Batch Controls. Define it using the Native File Storage adapter. Try to create as few aliases as possible to reduce maintenance costs.
  • Change all the FILE-PATH parameters in your batch controls to use the relevant file-storage URL.

If you decide to migrate to the Oracle Utilities SaaS Cloud, these Extendable Lookup values will be the only thing that changes to realign the implementation to the relevant location on the Cloud instance. For both on-premise implementations and the cloud, these definitions can now be migrated using the Configuration Migration Assistant.

Oracle 12.2 : Windows Virtual Account

Yann Neuhaus - Thu, 2018-09-20 09:51

With Oracle 12.2 we can use a Virtual Account during the Oracle installation on Windows. Virtual Accounts allow you to install an Oracle Database and create and manage database services without passwords. A Virtual Account can be used as the Oracle Home User for Oracle Database single-instance installations and does not require a user name or password during installation and administration.
In this blog I want to share an experience I had with Windows Virtual Accounts when installing Oracle.
I was setting up an Oracle environment on Windows Server 2016 for a client. During the installation I decided to use the Virtual Account option.
Capture1
After the installation of Oracle, I created a database PROD, and everything was fine.

SQL*Plus: Release 12.2.0.1.0 Production on Wed Sep 19 05:43:05 2018

Copyright (c) 1982, 2016, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Standard Edition Release 12.2.0.1.0 - 64bit Production

SQL> select name,open_mode from v$database;

NAME      OPEN_MODE
--------- --------------------
PROD      READ WRITE

SQL>

SQL> show parameter spfile

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
spfile                               string      C:\APP\ORACLE\PRODUCT\12.2.0\D
                                                 BHOME_1\DATABASE\SPFILEPROD.ORA
                                                
SQL>

Looking at the properties of my spfile I can see that there is a Windows group named ORA_OraDB12Home1_SVCACCTS
namedgroup
which has full control of the spfile. Indeed, as we used the Virtual Account to install the Oracle software, Oracle automatically creates this group and uses it for some tasks.
Capture2
After the first database, the client asked for a second one. Using DBCA I created a second database, let's say ORCL.
After the creation of ORCL, I changed some configuration parameters of the first database PROD and decided to restart it. And then I was surprised by the following error.

SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file 'C:\APP\ORACLE\PRODUCT\12.2.0\DBHOME_1\DATABASE\INITPROD.ORA'
SQL>

Wow!! What happened is that when DBCA created the second database ORCL, Oracle changed the permissions on the spfile of the first database PROD (spfilePROD.ora). Yes, it's strange, but this is exactly what happened. The Virtual Group was replaced by OracleServiceORCL.
Capture3

On the other side, the ORCL spfile was fine.
Capture4

So I decided to remove OracleServiceORCL from the properties of the PROD spfile and add back the Virtual Group.
Capture5

And Then I was able to start the PROD database

SQL> startup
ORACLE instance started.

Total System Global Area  524288000 bytes
Fixed Size                  8748760 bytes
Variable Size             293601576 bytes
Database Buffers          213909504 bytes
Redo Buffers                8028160 bytes
Database mounted.
Database opened.
SQL>

But this issue means that every time I create a new database with DBCA, the spfile permissions of other databases may be changed, and this is not normal.
While checking this strange issue I found this Oracle Support note:
DBCA Using Virtual Account Incorrectly Sets The SPFILE Owner (Doc ID 2410452.1)

So I decided to apply the patches recommended by Oracle:
Oracle Database 12.2.0.1.180116BP
26615680

C:\Users\Administrator>c:\app\oracle\product\12.2.0\dbhome_1\OPatch\opatch lspatches
26615680;26615680:SI DB CREATION BY DBCA IN VIRTUAL ACCOUNT INCORRECTLY SETS THE ACL FOR FIRST DB
27162931;WINDOWS DB BUNDLE PATCH 12.2.0.1.180116(64bit):27162931

I then created a new database TEST to see whether the patches had corrected the issue.
Well, I was able to restart all databases without any errors. But looking at the properties of the three databases' spfiles, we can see that the patch added back the Virtual Group, but the service of the last created database is still present for the previous databases. I don't really understand why OracleServiceTEST should be present in spfilePROD.ora and spfileORCL.ora.

Capture6

Capture7

Capture8

Conclusion: in this blog I shared an issue I experienced with Windows Virtual Accounts. I hope this helps.

 

The article Oracle 12.2 : Windows Virtual Account appeared first on Blog dbi services.

Oracle Recognized as a Leader in Gartner Magic Quadrant for Cloud HCM Suites for Midmarket and Large Enterprises

Oracle Press Releases - Thu, 2018-09-20 07:00
Press Release
Oracle Recognized as a Leader in Gartner Magic Quadrant for Cloud HCM Suites for Midmarket and Large Enterprises Oracle Placed Furthest for Completeness of Vision within the entire Gartner Magic Quadrant

REDWOOD SHORES, Calif. —Sep 20, 2018

Oracle today announced that it has been recognized, for the third consecutive year, as a Leader in Cloud HCM Suites for Midmarket and Large Enterprises by Gartner. The 2018 Gartner Magic Quadrant for Cloud HCM Suites for Midmarket and Large Enterprises evaluates vendors based on completeness of vision and ability to execute. It positioned Oracle furthest for completeness of vision for Cloud HCM Suites. A complimentary copy of the report is available here.

“Our strong investment in a simple, powerful HCM system, and innovation in artificial intelligence and digital assistants, will forever change the experience of working with HCM systems,” said Chris Leone, senior vice president of development, Oracle HCM Cloud. “We are very pleased to be recognized by Gartner and believe our position as a Leader in this year’s report further validates our relentless commitment to helping customers gain a competitive advantage while adapting to the ever-accelerating pace of technological change.”

According to Gartner, “Leaders demonstrate a market-defining vision of how HCM technology can help HR leaders achieve business objectives. Leaders have the ability to execute against that vision through products and services, and have demonstrated solid business results in the form of revenue and earnings. In the cloud HCM suite market, Leaders show a consistent ability to win deals, including the foundational elements of admin HR (with a large number of country-specific HR localizations) and high attach rates of Talent Management, Workforce Management and HRSD capabilities. They have multiple proof points of successful implementations. Further, these customers have workforces deployed in more than one of the main geographic regions (North America, Europe, MENA, Latin America and Asia/Pacific), in a wide variety of vertical industries and sizes of organization (by number of employees). Leaders are often what other providers in the market measure themselves against.”

Part of Oracle Cloud Applications, Oracle HCM Cloud enables HR professionals to simplify the complex in order to meet the increasing expectations of an ever-changing workforce and business environment. By providing a complete and powerful platform that spans the entire employee life cycle, Oracle HCM Cloud helps HR professionals deliver superior employee experience, align people strategy to evolving business priorities, and cultivate a culture of continuous innovation.

For additional information on Oracle HCM Cloud visit: https://cloud.oracle.com/en_US/hcm-cloud.

Gartner, Magic Quadrant for Cloud HCM Suites for Midmarket and Large Enterprises, Melanie Lougee, Ranadip Chandra, et al., 15 August 2018.

Contact Info
Simon Jones
Oracle PR
415-202-4574
s.jones@oracle.com
Gartner Disclaimer

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


Object Erasure capability introduced in 4.3.0.6.0

Anthony Shorten - Wed, 2018-09-19 17:45

With data privacy regulations around the world being strengthened, data management principles need to be extended to most objects in the product. In the past, Information Lifecycle Management (ILM) was introduced for transaction object management and continues to be used today in implementations for effective data management. When designing the ILM capability, it did not make sense to extend it to master data such as Accounts, Persons, Premises, Meters, Assets, Crews, etc., as data management and privacy rules tend to be different for these types of objects.

In Oracle Utilities Application Framework V4.3.0.6.0, we have introduced Object Erasure to support master data, covering both purging and obfuscation of data. This new capability is complementary to Information Lifecycle Management, together offering full data management capability. It does not replace Information Lifecycle Management, nor does it depend on Information Lifecycle Management being licensed. Customers using Information Lifecycle Management in conjunction with Object Erasure can implement full end-to-end data management capabilities.

The idea behind Object Erasure is as follows:

  • Any algorithm can call the Manage Erasure algorithm on the associated Maintenance Object to check whether the conditions make the object eligible for erasure. This gives implementations the flexibility to initiate the process from a wide range of triggers; it can be as simple as checking some key fields or key data on an object (you decide the criteria). The Manage Erasure algorithm detects the conditions, collates the relevant information and calls the F1-ManageErasureSchedule Business Service to create an Erasure Schedule Business Object in a Pending state to initiate the process. A set of generic Erasure Schedule Business Objects is provided (for example, a generic Purge object for use in purging data) and you can create your own to record additional information.
  • The Erasure Schedule BO has three states which can be configured with algorithms (usually Enter Algorithms, a set are provided for reuse with the product).
    • Pending - This is the initial state of the erasure
    • Erased - This is the most common final state indicating the object has been erased or been obfuscated.
    • Discarded - This is an alternative final state where the record can be parked (for example, if the object becomes ineligible, an error has occurred during erasure, or a reversal of obfuscation is required).
  • A new Erasure Monitor (F1-OESMN) Batch Control can be used to transition the Erasure Schedule through its states and perform the erasure or obfuscation activity.

Here is a summary of this processing:

Erasure Flow

Note: The base-supplied Purge Enter algorithm (F1-OBJERSPRG) can be used for most requirements. It does not remove the object from the _K key tables, to avoid conflicts when reallocating identifiers.

The solution has been designed with a portal that links all the elements together easily, and the product comes with a set of predefined objects ready to use. The portal also allows an implementer to configure Erasure Days, which is effectively the number of days a record remains in the Erasure Schedule before being considered by the Erasure Monitor (basically a waiting period).

Erasure Configuration

As an implementer you can just build the Manage Erasure algorithm to detect the business event, or you can also write the algorithms to perform all of the processing (and every variation in between). The erasure will respect any business rules configured for the Maintenance Object, so erasure or obfuscation will only occur if the business rules permit it.

Customers using Information Lifecycle Management can manage the storage of Erasure Schedule objects using Information Lifecycle Management.

Objects Provided

The Object Erasure capability supplies a number of objects you can use for your implementation:

  • Set of Business Objects. A number of Erasure Schedule Business Objects such as F1-ErasureScheduleRoot (Base Object), F1-ErasureScheduleCommon (Generic Object for Purges) and F1-ErasureScheduleUser (for user record obfuscation). Each product may ship additional Business Objects.
  • Common Business Services. A number of Business Services including F1-ManageErasureSchedule to use within your Manage Erasure algorithm to create the necessary Erasure Schedule Object.
  • Set of Manage Erasure Algorithms. For each predefined Object Erasure object provided with the product, a set of Manage Erasure algorithms are supplied to be connected to the relevant Maintenance Object.
  • Erasure Monitor Batch Control. The F1-OESMN Batch Control provided to manage the Erasure Schedule Object state transition.
  • Enter Algorithms. A set of predefined Enter algorithms to use with the Erasure Schedule Object to perform common outcomes including Purge processing.
  • Erasure Portal. A portal to display and maintain the Object Erasure configuration.

Refer to the online documentation for further advice on Object Erasure.

In-Database Archiving

Tom Kyte - Wed, 2018-09-19 15:46
Hi, currently I am using list partitioning based on a status column to classify data as ACTIVE and EXPIRED. The corresponding partitions are then exported and dropped from Prod. The problem with this approach is the internal data m...
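
For context, a minimal sketch of the 12c In-Database Archiving feature the title refers to, where rows are soft-archived in place via the hidden ORA_ARCHIVE_STATE column instead of being moved between partitions (table and predicate are hypothetical):

-- Enable row archival; this adds the hidden ORA_ARCHIVE_STATE column.
ALTER TABLE orders ROW ARCHIVAL;

-- Soft-archive expired rows: they stay in the table but disappear
-- from ordinary queries.
UPDATE orders SET ora_archive_state = '1' WHERE status = 'EXPIRED';
COMMIT;

-- Archived rows are hidden by default; make them visible per session.
ALTER SESSION SET ROW ARCHIVAL VISIBILITY = ALL;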
Categories: DBA Blogs
