OP | Posted 2015-3-31 23:57:44
Searched MOS and found this is very likely Bug 15892456.
Hdr: 15892456 11.2.0.2.5 RDBMS 11.2.0.2.5 DATAGUARD_PSBY PRODID-5 PORTID-226
Abstract: MRP IS NOT APPLYING REDO (MRP0, PR00 AND PRXX SEEMS IDLE FOR A LONG TIME)
*** BPADMWW 11/19/12 10:06 am *** (ADD: Impact/Symptom->OTHER - FEATURE UNUS...)
*** CGREY 11/19/12 10:06 am ***
@ TAR:
@ ----
@ SR 3-6327164354
BUG TYPE CHOSEN
===============
Code
Component: RDBMS
================
DETAILED PROBLEM DESCRIPTION
============================
11.2.0.2.5; the primary is a 3-node RAC and the standby a 2-node RAC.
1. edit database provprr set state = 'APPLY-OFF' from DGMGRL hangs for
many hours.
2. After instance 1 of the primary crashes, the standby does not apply logs
already shipped from the primary.
During the Oct 16 MRP hang the customer turned on log_archive_trace for 3
minutes, but it produced no output.
A system state dump was written to PROVPRR2_mrp0_10613.trc.
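For reference, the broker command in question and a quick way to inspect the
apply state from DGMGRL look roughly like this (a sketch only; the database
name provprr comes from the report, the connect credentials are placeholders):

    $ dgmgrl sys@provprr
    DGMGRL> show configuration;                               -- overall broker health
    DGMGRL> show database 'provprr';                          -- Intended State / Apply State
    DGMGRL> edit database 'provprr' set state='APPLY-OFF';    -- the command that hung
    DGMGRL> edit database 'provprr' set state='APPLY-ON';     -- re-enable redo apply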
@ EXPLAIN WHY YOU BELIEVE THIS TO BE A PRODUCT DEFECT
@ ===================================================
@ MRP should not stop applying and hang
@
DIAGNOSTIC ANALYSIS
===================
alert_PROVPRD1_primary.log: the primary instance 1 alert log shows the crash.
Current sequences are 31 and 32
Thu Oct 11 14:08:18 2012
LGWR: Standby redo logfile selected to archive thread 1 sequence 32
LGWR: Standby redo logfile selected for thread 1 sequence 32 for destination
LOG_ARCHIVE_DEST_1
Thread 1 advanced to log sequence 32 (LGWR switch)
Current log# 4 seq# 32 mem# 0: +DATA_RAID1_PROVPRD/provprd/redo04a.log
Current log# 4 seq# 32 mem# 1: +DATA_RAID1_PROVPRD/provprd/redo04b.log
Thu Oct 11 14:08:36 2012
Archived Log entry 2188 added for thread 1 sequence 31 ID 0x302ac18 dest 2:
Thu Oct 11 14:09:47 2012
ARC4: Standby redo logfile selected for thread 1 sequence 31 for destination
LOG_ARCHIVE_DEST_1
Thu Oct 11 14:10:09 2012
ORA-16198: LGWR received timedout error from KSR
LGWR: Attempting destination LOG_ARCHIVE_DEST_1 network reconnect (16198)
LGWR: Destination LOG_ARCHIVE_DEST_1 network reconnect abandoned
Error 16198 for archive log file 4 to 'provprr'
Thu Oct 11 14:11:24 2012
...
Thu Oct 11 14:19:16 2012
Timed out trying to start process m001.
Thu Oct 11 14:19:19 2012
Errors in file
/app/oracle/diag/rdbms/provprd/PROVPRD1/trace/PROVPRD1_arc4_27890.trc
(incident=5045993):
ORA-240: control file enqueue held for more than 120 seconds
Incident details in:
/app/oracle/diag/rdbms/provprd/PROVPRD1/incident/incdir_5045993/PROVPRD1_arc4
_27890_i5045993.trc
Thu Oct 11 14:19:22 2012
...
opiodr aborting process unknown ospid (30237) as a result of ORA-609
Thu Oct 11 14:20:26 2012
PMON failed to acquire latch, see PMON dump
Thu Oct 11 14:21:29 2012
PMON failed to acquire latch, see PMON dump
Thu Oct 11 14:22:21 2012
...
opiodr aborting process unknown ospid (30316) as a result of ORA-609
Thu Oct 11 14:18:51 2012
Errors in file
/app/oracle/diag/rdbms/provprd/PROVPRD1/trace/PROVPRD1_arc0_27871.trc
(incident=5045945):
ORA-240: control file enqueue held for more than 120 seconds
Incident details in:
/app/oracle/diag/rdbms/provprd/PROVPRD1/incident/incdir_5045945/PROVPRD1_arc0
_27871_i5045945.trc
Thu Oct 11 14:40:34 2012
Starting ORACLE instance (normal)
alert_PROVPRR2.log: standby instance 2 is the apply instance. It looks like MRP
was hung and eventually stopped applying
Thu Oct 11 00:18:01 2012
Completed: ALTER DATABASE RECOVER MANAGED STANDBY DATABASE THROUGH ALL
SWITCHOVER DISCONNECT USING CURRENT LOGFILE
Thu Oct 11 00:18:01 2012
Media Recovery Waiting for thread 2 sequence 17 (in transit)
Recovery of Online Redo Log: Thread 2 Group 18 Seq 17 Reading mem 0
Mem# 0: +DATA_RAID1_PROVPRD/provprr/standby06a.log
Media Recovery Waiting for thread 1 sequence 29 (in transit)
Recovery of Online Redo Log: Thread 1 Group 13 Seq 29 Reading mem 0
Mem# 0: +DATA_RAID1_PROVPRD/provprr/standby01a.log
Media Recovery Waiting for thread 3 sequence 25 (in transit)
Recovery of Online Redo Log: Thread 3 Group 23 Seq 25 Reading mem 0
Mem# 0: +DATA_RAID1_PROVPRD/provprr/standby11a.log
Thu Oct 11 11:44:16 2012
Standby controlfile consistent with primary
RFS[4]: Selected log 14 for thread 1 sequence 30 dbid 32324255 branch
795712034
Thu Oct 11 11:44:16 2012
Media Recovery Waiting for thread 1 sequence 30 (in transit)
Recovery of Online Redo Log: Thread 1 Group 14 Seq 30 Reading mem 0
Mem# 0: +DATA_RAID1_PROVPRD/provprr/standby02a.log
Media Recovery Waiting for thread 3 sequence 26 (in transit)
Recovery of Online Redo Log: Thread 3 Group 24 Seq 26 Reading mem 0
Mem# 0: +DATA_RAID1_PROVPRD/provprr/standby12a.log
Thu Oct 11 11:44:31 2012
Archived Log entry 4717 added for thread 1 sequence 29 ID 0x302ac18 dest 2:
Thu Oct 11 14:08:36 2012
RFS[6]: Assigned to RFS process 27430
RFS[6]: Selected log 25 for thread 3 sequence 27 dbid 32324255 branch
795712034
Thu Oct 11 14:08:51 2012
Media Recovery Log
+ARCH_PROVPRD/provprr/archivelog/2012_10_11/thread_2_seq_17.4933.796399699
Media Recovery Waiting for thread 3 sequence 27 (in transit)
Recovery of Online Redo Log: Thread 3 Group 25 Seq 27 Reading mem 0
Mem# 0: +DATA_RAID1_PROVPRD/provprr/standby13a.log
alert_PROVPRR2.log: the standby apply instance 2 has not applied redo for a day
or so; the apply is then cancelled and the cancel hangs.
Mon Oct 15 17:07:02 2012
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL
Mon Oct 15 18:00:50 2012
System State dumped to trace file
/app/oracle/diag/rdbms/provprr/PROVPRR2/trace/PROVPRR2_ora_23483.trc
Mon Oct 15 18:10:34 2012
System State dumped to trace file
/app/oracle/diag/rdbms/provprr/PROVPRR2/trace/PROVPRR2_mrp0_10613.trc
Tue Oct 16 10:21:32 2012
ALTER SYSTEM SET log_archive_trace=8192 SCOPE=BOTH SID='*';
<-------------- ran tracing for about 3 minutes here
Tue Oct 16 10:24:17 2012
ALTER SYSTEM SET log_archive_trace=0 SCOPE=BOTH SID='*';
Tue Oct 16 10:58:25 2012
System State dumped to trace file
/app/oracle/diag/rdbms/provprr/PROVPRR2/trace/PROVPRR2_mrp0_10613.trc
Tue Oct 16 12:10:43 2012
Errors in file
/app/oracle/diag/rdbms/provprr/PROVPRR2/trace/PROVPRR2_mrp0_10613.trc:
Tue Oct 16 12:10:44 2012
Dumping diagnostic data in directory=[cdmp_20121016121044], requested by
(instance=2, osid=10613 (MRP0)), summary=[abnormal process termination].
Tue Oct 16 13:33:59 2012
Standby controlfile consistent with primary
RFS[8]: Selected log 23 for thread 3 sequence 71 dbid 32324255 branch
795712034
Tue Oct 16 13:34:12 2012
Archived Log entry 4808 added for thread 3 sequence 70 ID 0x302ac18 dest 2:
Tue Oct 16 15:04:02 2012
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL
Tue Oct 16 15:05:19 2012
Managed Standby Recovery Canceled (PROVPRR2)
Completed: ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL
Tue Oct 16 15:05:20 2012
Recovery coordinator died, shutting down parallel recovery
<--------------- you must have killed it here
Tue Oct 16 15:05:29 2012
ORA-1013 signalled during: ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
CANCEL...
Tue Oct 16 15:06:39 2012
Logmerger died, shutting down parallel recovery slaves
Recovery interrupted!
Recovery interrupted!
Tue Oct 16 15:06:43 2012
Reconfiguration started (old inc 1226, new inc 1228)
List of instances:
1 2 3 (myinst: 2)
Global Resource Directory frozen
Communication channels reestablished
Tue Oct 16 15:06:43 2012
* domain 0 valid = 1 according to instance 1
Master broadcasted resource hash value bitmaps
Non-local Process blocks cleaned out
Tue Oct 16 15:06:43 2012
LMS 0: 0 GCS shadows cancelled, 0 closed, 0 Xw survived
Tue Oct 16 15:06:43 2012
LMS 1: 0 GCS shadows cancelled, 0 closed, 0 Xw survived
Tue Oct 16 15:06:43 2012
LMS 3: 0 GCS shadows cancelled, 0 closed, 0 Xw survived
Tue Oct 16 15:06:43 2012
LMS 2: 0 GCS shadows cancelled, 0 closed, 0 Xw survived
Set master node info
Submitted all remote-enqueue requests
Dwn-cvts replayed, VALBLKs dubious
All grantable enqueues granted
Submitted all GCS remote-cache requests
Fix write in gcs resources
Reconfiguration complete
Tue Oct 16 15:30:34 2012
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE THROUGH ALL SWITCHOVER
DISCONNECT USING CURRENT LOGFILE
Attempt to start background Managed Standby Recovery process (PROVPRR2)
Tue Oct 16 15:30:34 2012
....
After the MRP restart it is applying again.
Tue Oct 16 15:30:46 2012
Parallel Media Recovery started with 64 slaves
Waiting for all non-current ORLs to be archived...
All non-current ORLs have been archived.
Tue Oct 16 15:30:47 2012
Media Recovery Log
+ARCH_PROVPRD/provprr/archivelog/2012_10_11/thread_1_seq_32.4942.796401567
Media Recovery Log
+ARCH_PROVPRD/provprr/archivelog/2012_10_11/thread_2_seq_18.4934.796399717
Tue Oct 16 15:30:47 2012
Completed: ALTER DATABASE RECOVER MANAGED STANDBY DATABASE THROUGH ALL
SWITCHOVER DISCONNECT USING CURRENT LOGFILE
Media Recovery Log
+ARCH_PROVPRD/provprr/archivelog/2012_10_11/thread_3_seq_28.4938.796401515
Thu Oct 11 14:09:05 2012
Media Recovery Waiting for thread 1 sequence 31
Thu Oct 11 14:09:12 2012
Archived Log entry 4722 added for thread 3 sequence 27 ID 0x302ac18 dest 2:
Thu Oct 11 14:11:33 2012
Recovery of Online Redo Log: Thread 1 Group 14 Seq 31 Reading mem 0
Mem# 0: +DID1_PROVPRD/provprr/standby02a.log
Media Recovery Log
+ARCH_PROVPRD/provprr/archivelog/2012_10_11/thread_2_seq_18.4934.796399717
Thu Oct 11 14:11:49 2012
Media Recovery Waiting for thread 3 sequence 28 (in transit)
Recovery of Online Redo Log: Thread 3 Group 23 Seq 28 Reading mem 0
Mem# 0: +DATA_RAID1_PROVPRD/provprr/standby11a.log
Recovery of Online Redo Log: Thread 1 Group 13 Seq 32 Reading mem 0
Mem# 0: +DATA_RAID1_PROVPRD/provprr/standby01a.log
Thu Oct 11 14:34:34 2012
RFS[5]: Possible network disconnect with primary database
Thu Oct 11 14:39:27 2012
RFS[7]: Assigned to RFS process 22717
RFS[7]: Opened log for thread 1 sequence 34 dbid 32324255 branch 795712034
Archived Log entry 4726 added for thread 1 sequence 34 rlc 795712034 ID
0x302ac18 dest 1:
Thu Oct 11 14:43:47 2012
Primary database is in MAXIMUM AVAILABILITY mode
Standby controlfile consistent with primary
Standby controlfile consistent with primary
RFS[8]: Assigned to RFS process 1735
RFS[8]: Selected log 23 for thread 3 sequence 31 dbid 32324255 branch
795712034
Standby controlfile consistent with primary
RFS[8]: Selected log 24 for thread 3 sequence 32 dbid 32324255 branch
795712034
Thu Oct 11 14:43:53 2012
Archived Log entry 4732 added for thread 3 sequence 31 ID 0x302ac18 dest 2:
Thu Oct 11 16:49:06 2012
Standby controlfile consistent with primary
RFS[8]: Selected log 23 for thread 3 sequence 33 dbid 32324255 branch
795712034
Thu Oct 11 16:49:18 2012
WORKAROUND?
===========
Yes
WORKAROUND INFORMATION
======================
Restart MRP. The interruption is not acceptable for HA.
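Concretely, the restart is just the cancel/start pair that appears later in the
alert log; run on the standby apply instance as SYSDBA (statements taken from
the log excerpts above):

    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE THROUGH ALL SWITCHOVER
         DISCONNECT USING CURRENT LOGFILE;

Note that in this incident the CANCEL itself hung, and per the Oct 16 15:05
entries the recovery coordinator appears to have been killed at the OS level
before apply could be restarted.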
TECHNICAL IMPACT
================
Manual intervention required - there was no notification to the DBA, so redo
was not applied for hours
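A sketch of the kind of check that would have caught this, using standard 11.2
fixed views (polling interval and alerting are left out): run on the standby
and alert when MRP0 is missing or its applied SEQUENCE# stops advancing; in
this incident MRP0 was still present but idle, so lack of progress is the real
signal.

    SQL> SELECT process, status, thread#, sequence#, block#
           FROM v$managed_standby
          WHERE process IN ('MRP0', 'RFS');

    SQL> SELECT name, value, time_computed
           FROM v$dataguard_stats
          WHERE name IN ('apply lag', 'transport lag');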
RELATED ISSUES (bugs, forums, RFAs)
===================================
Customer believed it was Bug 13470324, but that bug does not seem to apply.
HOW OFTEN DOES THE ISSUE REPRODUCE AT CUSTOMER SITE?
====================================================
Intermittent
DOES THE ISSUE REPRODUCE INTERNALLY?
====================================
No
EXPLAIN WHY THE ISSUE WAS NOT TESTED INTERNALLY.
================================================
Complex setup
IS A TESTCASE AVAILABLE?
========================
No
@ DID THIS WORK PREVIOUSLY?
@ =========================
@ No
@
@ DID YOU TEST THIS ON THE LATEST RELEASE/PATCHSET?
@ =================================================
@ No
@
@ EXPLAIN WHY YOU DID NOT TEST THIS ON THE LATEST RELEASE/PATCHSET.
@ =====================================================================
@ 11.2.0.2.5 only reproduced at customer site
@
@ STACK TRACE/FILE Information
@ ============================
@ alert logs, mrp trace with system state dump, opatch lsinventory -detail
@
Link to IPS Package:
====================
not available
@ P1 Defect?
@ ==========
@ No
@
*** JBARLOW 11/21/12 02:14 am *** (CHG: Sta->10)
*** JBARLOW 11/21/12 02:14 am ***
@ No evidence on bugsftp. Please upload.
*** CGREY 11/21/12 03:05 pm ***
@ Bug Attachment:alert_PROVPRD1.log.zip:alert log primary
@ Bug Attachment:alert_PROVPRD2.log.zip:alert log primary
@ Bug Attachment:alert_PROVPRD3.log.zip:alert log primary
@ Bug Attachment:alert_PROVPRD1.log.zip:standby
@ Bug Attachment:alert_PROVPRD2.log.zip:standby
@ Bug Attachment:alert_PROVPRD3.log.zip:standby
@ Bug Attachment:alert_PROVPRR1.log.zip:primary
@ Bug Attachment:alert_PROVPRR2.log.zip:primary
@ Bug Attachment:alert_PROVPRR3.log.zip:primary
@ Bug Attachment:Broker_Conf.txt:broker
@ Bug Attachment:drcPROVPRD1.zip:primary
@ Bug Attachment:drcPROVPRR1.zip:standby
@ Bug Attachment:lsinventory_Primary.txt:patch
@ Bug Attachment:lsinventory_Standby.txt:patch
*** CGREY 11/21/12 03:11 pm *** (CHG: Sta->16)
*** JBARLOW 11/22/12 06:48 am *** (CHG: Sta->10)
*** JBARLOW 11/22/12 06:48 am ***
@ Where is the MRP trace and systemstate? Please upload these to bugftp.
*** CGREY 11/26/12 09:59 am ***
@ Bug Attachment:PROVPRR2_mrp0_10613.trc:state dump in mrp trace file
@ PROVPRR2_mrp0_10613.trc
*** CGREY 11/26/12 09:59 am ***
@ Sorry, attaching system state dump in mrp trace file PROVPRR2_mrp0_10613.trc
@ Thanks
*** CGREY 11/26/12 09:59 am *** (CHG: Sta->16)
@ *** JBARLOW 12/06/12 04:52 am *** (CHG: Support Contact->CGREY->JBARLOW)
*** JBARLOW 12/06/12 04:52 am *** (CHG: Asg->JBARLOW)
*** JBARLOW 12/07/12 03:41 am *** (CHG: Sta->10)
*** JBARLOW 12/07/12 03:41 am ***
@ alert_PROVPRR2.log :
@ .
@ Mon Oct 15 17:07:02 2012
@ ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL
@ Mon Oct 15 18:00:50 2012
@ System State dumped to trace file
@ /app/oracle/diag/rdbms/provprr/PROVPRR2/trace/PROVPRR2_ora_23483.trc
@ Mon Oct 15 18:10:34 2012
@ System State dumped to trace file
@ /app/oracle/diag/rdbms/provprr/PROVPRR2/trace/PROVPRR2_mrp0_10613.trc
@ Tue Oct 16 10:21:32 2012
@ ALTER SYSTEM SET log_archive_trace=8192 SCOPE=BOTH SID='*';
@ Tue Oct 16 10:24:17 2012
@ ALTER SYSTEM SET log_archive_trace=0 SCOPE=BOTH SID='*';
@ Tue Oct 16 10:58:25 2012
@ System State dumped to trace file
@ /app/oracle/diag/rdbms/provprr/PROVPRR2/trace/PROVPRR2_mrp0_10613.trc
@ Tue Oct 16 12:10:43 2012
@ Errors in file
@ /app/oracle/diag/rdbms/provprr/PROVPRR2/trace/PROVPRR2_mrp0_10613.trc:
@ Tue Oct 16 12:10:44 2012
@ Dumping diagnostic data in directory=[cdmp_20121016121044], requested by
@ (instance=2, osid=10613 (MRP0)), summary=[abnormal process termination].
@ Tue Oct 16 13:33:59 2012
@ Standby controlfile consistent with primary
@ RFS[8]: Selected log 23 for thread 3 sequence 71 dbid 32324255 branch
@ 795712034
@ Tue Oct 16 13:34:12 2012
@ Archived Log entry 4808 added for thread 3 sequence 70 ID 0x302ac18 dest 2:
@ Tue Oct 16 15:04:02 2012
@ ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL
@ Tue Oct 16 15:05:19 2012
@ Managed Standby Recovery Canceled (PROVPRR2)
@ Completed: ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL
@ .
@ From this we can see that 3 systemstates were dumped while the 'ALTER
@ DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL' was in progress.
@ These are in tracefiles PROVPRR2_ora_23483.trc and PROVPRR2_mrp0_10613.trc.
@ .
@ However, only PROVPRR2_mrp0_10613.trc has been uploaded to bugsftp. This file
@ is missing the first systemstate because the file was removed and recreated.
@ Furthermore, the single systemstate that we have was a level 2, so we don't
@ have the short stacks.
@ .
@ We do have the wait stacks in the systemstate. MRP (process 55) is in an idle
@ cycle of short waits for 'parallel recovery control message reply' and 'class
@ slave wait'. I would expect to see these waits in an idle MRP process. There
@ is no indication that this is the culprit.
@ .
@ Other sessions in a current long wait are VKTM (this is idle), process 112
@ and process 131.
@ .
@ Process 112:
@ .
@ OS pid 32273
@ .
@ Current Wait Stack:
@ 0: waiting for 'SQL*Net message from client'
@ driver id=0x54435000, #bytes=0x1, =0x0
@ wait_id=12197 seq_num=12247 snap_id=1
@ wait times: snap=1071 min 26 sec, exc=1071 min 26 sec, total=1071 min 26
@ sec
@ wait times: max=infinite, heur=1071 min 26 sec
@ .
@ This wait started on Oct 15 at 17:06:59. I think that this is a user session
@ and is in an idle wait.
@ .
@ Process 131:
@ .
@ OS pid 26919
@ .
@ This is RFS[3] (unless the OS process table has wrapped)
@ .
@ Current Wait Stack:
@ 0: waiting for 'SQL*Net message from client'
@ driver id=0x54435000, #bytes=0x1, =0x0
@ wait_id=247 seq_num=474 snap_id=1
@ wait times: snap=18295 min 41 sec, exc=18295 min 41 sec, total=18295 min
@ 41 sec
@ wait times: max=infinite, heur=18295 min 41 sec
@ .
@ This session has been in wait since the RFS process was created. This
@ indicates an idle RFS process.
@ .
@ There is not a great deal extra to be gleaned from the systemstate. All
@ recovery slaves are in idle wait states.
@ .
@ We really needed a series of level 266 systemstates (so we got the short
@ stacks) and/or level 3 hanganalyze dumps.
@ .
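For reference, the level 266 systemstates and level 3 hanganalyze dumps
requested here are normally taken with oradebug from a SYSDBA session on the
hung instance, repeated a few minutes apart so the stacks can be compared
(standard oradebug commands; levels as requested in the note above):

    SQL> oradebug setmypid
    SQL> oradebug unlimit
    SQL> oradebug dump systemstate 266     -- 266 = level 10 + 256, includes short stacks
    SQL> oradebug dump hanganalyze 3
    SQL> oradebug -g all hanganalyze 3     -- RAC-wide hang analysis, if needed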
@ I would like to see what the other evidence dumped tells us so can you ask
@ the customer to upload the initial systemstate dump (PROVPRR2_ora_23483.trc)
@ and the contents of the diagnostic data dump directory cdmp_20121016121044.
@ .
@ Also has this problem occurred before or since?
*** CGREY 12/07/12 09:38 am ***
@ Asking customer for requested output.
@ Thanks
*** CGREY 12/12/12 01:08 pm ***
@ GTCRLK:3-6327164354:PROVPRR2_ora_23483.trc.zip:Attachment uploaded From
@ Internal Portal
@ GTCRLK:3-6327164354:cdmp_20121016121044.zip:Attachment uploaded From Internal
@ Portal
*** CGREY 12/12/12 01:09 pm *** (CHG: Sta->16)
*** JBARLOW 12/14/12 12:49 am *** (CHG: Sta->10)
*** JBARLOW 12/14/12 12:49 am ***
@ The systemstate in PROVPRR2_ora_23483.trc is essentially the same as that in
@ PROVPRR2_mrp0_10613.trc i.e. the process which issued the command to shutdown
@ standby recovery is waiting for MRP to terminate (waiting on "MRP
@ inactivation"), but MRP just appears to be in an idle state.
@ .
@ The systemstates show that although PR00 is not in a wait state, its last
@ wait ended 5973 min 45 sec ago, i.e. 4 days 3 hrs 33 mins. The systemstate was
@ taken at 2012-10-15 18:00:50, so PR00's last wait ended around Oct 11 14:27.
@ .
@ The standby alert logs shows that this is around the time of the last ORL
@ recovery:
@ .
@ Thu Oct 11 14:11:49 2012
@ Media Recovery Waiting for thread 3 sequence 28 (in transit)
@ Recovery of Online Redo Log: Thread 3 Group 23 Seq 28 Reading mem 0
@ Mem# 0: +DATA_RAID1_PROVPRD/provprr/standby11a.log
@ Recovery of Online Redo Log: Thread 1 Group 13 Seq 32 Reading mem 0
@ Mem# 0: +DATA_RAID1_PROVPRD/provprr/standby01a.log
@ Thu Oct 11 14:34:34 2012
@ RFS[5]: Possible network disconnect with primary database
@
@ The next recovered log was after managed recovery was restarted:
@ .
@ Tue Oct 16 16:06:51 2012
@ Media Recovery Waiting for thread 1 sequence 42 (in transit)
@ Recovery of Online Redo Log: Thread 1 Group 13 Seq 42 Reading mem 0
@ Mem# 0: +DATA_RAID1_PROVPRD/provprr/standby01a.log
@ .
@ So it appears that PR00 got itself stuck around Oct 11 14:27.
@ .
@ We would have required multiple stack traces from PR00 to help understand why
@ the process is stuck.
@ .
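Should it recur, a sketch of how those PR00 stacks would typically be gathered:
look up the OS pid of the recovery coordinator and take several short stacks a
minute or so apart (v$process and oradebug are standard; the pid shown is a
placeholder):

    SQL> SELECT spid, pname FROM v$process WHERE pname LIKE 'PR0%';
    SQL> oradebug setospid 12345           -- placeholder: PR00's spid from the query above
    SQL> oradebug short_stack              -- repeat several times, ~1 minute apart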
@ Has this problem happened before and/or since this occurrence? The current
@ evidence is insufficient to make any further progress in RCA.
*** CGREY 12/14/12 08:50 am ***
@ I updated the customer asking if they can reproduce it or if it happens often
@ enough. Can you give an action plan of what we need in case they can
@ reproduce it?
@ Thanks
*** CGREY 12/30/12 08:50 am ***
@ This is not reproducing and the customer has not provided an update.
@ Therefore putting the bug in Auto Close status.
@ Thanks
*** JBARLOW 02/05/13 05:50 am *** (CHG: Sta->31)
*** JBARLOW 02/05/13 05:50 am *** (ADD: Impact/Symptom->HANG - DATABASE INST...)
*** CGREY 02/05/13 07:50 am ***
@ SR is now closed
@ Thanks
*** JPINTO 05/06/13 10:21 pm *** (CHG: Sta->91)