Note: remap_table only renames the table within the same schema; it does not move it to another schema (use remap_schema for that).
impdp dav4635/Wolver1n323$ tables=PPM.EP_PPM_PROJECT_ALLOCATIONS remap_table=PPM.EP_PPM_PROJECT_ALLOCATIONS:MYTABLE1 dumpfile=16restore.dmp logfile=dx.log
-- Send a test email through Database Mail
EXEC msdb.dbo.sp_send_dbmail
@profile_name = 'CVTMSsQLT02 Mailer'
,@recipients = 'dexter.velasco@aei.com'
,@subject = 'Email from SQL Server'
,@body = 'This is my First Email sent from SQL Server :)'
,@importance ='HIGH'
GO
-- Mail still queued for sending
SELECT * FROM msdb.dbo.sysmail_unsentitems;
-- Mail sent successfully
SELECT * FROM msdb.dbo.sysmail_sentitems;
-- Mail that failed to send
SELECT * FROM msdb.dbo.sysmail_faileditems;
-- List Database Mail profiles
EXEC msdb.dbo.sysmail_help_profile_sp;
-- Join the event log to failed items to get the error description
SELECT err.[description],
       fail.*
FROM msdb.dbo.sysmail_event_log err
     INNER JOIN msdb.dbo.sysmail_faileditems fail
         ON err.mailitem_id = fail.mailitem_id;
SELECT COLUMN_NAME AS 'ColumnName'
,TABLE_NAME AS 'TableName'
FROM INFORMATION_SCHEMA.COLUMNS
WHERE COLUMN_NAME LIKE '%whatever%'
ORDER BY TableName
,ColumnName;
In this module, we will tackle the following:
1. Download a sample application from GitHub.
2. Create a Docker image of the sample application.
3. Run the application on your laptop.
You will need a GitHub account and Docker Desktop installed on your machine.
Steps
1. Go to https://github.com/nigelpoulton/gsd
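The three steps above can be sketched as shell commands. Only the repo URL comes from these notes; the image tag, build context, and port are my own assumptions, and the guarded `|| true` lets the sketch run even without network access or Docker Desktop:

```shell
# Sketch only: clone the sample repo, build an image, run it locally.
# IMG, the build directory, and port 8080 are assumptions, not from the repo.
REPO=https://github.com/nigelpoulton/gsd
IMG=gsd-sample:v1

git clone "$REPO" || true                 # needs network access
(cd gsd && docker image build -t "$IMG" . \
        && docker container run -d -p 8080:8080 "$IMG") || true   # needs Docker Desktop running
echo "image tag: $IMG"
```

Once the container is up, the app should answer on http://localhost:8080 (assuming the image exposes that port).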
Outline
C:\Users\Dexter.Velasco\Downloads\CloudAdoptionFramework-master\CloudAdoptionFramework-master\ready\AzNamingTool>dotnet restore --interactive
Could not execute because the application was not found or a compatible .NET SDK is not installed.
Possible reasons for this include:
* You intended to execute a .NET program:
The application 'restore' does not exist.
* You intended to execute a .NET SDK command:
It was not possible to find any installed .NET SDKs.
Install a .NET SDK from:
https://aka.ms/dotnet-download
Fix: install .NET SDK version 7.
Check Control Panel\Network and Internet\Network Connections, open the adapter's Properties, and untick IPv6.
HKGDBAPPT01|epwebd:/dbapps/oracle/admin/epwebd/dpdump>impdp dav4635 tables=COMPASS.EC_CAR_AR_HISTORY directory=DATA_PUMP_DIR dumpfile=expdpWEBP-2903202317.dmp logfile=impdx.log table_exists_action=truncate
HKGDBAPPT01|epwebd:/dbapps/oracle/admin/epwebd/dpdump>impdp dav4635 tables=COMPASS.EC_CAR_AR_HISTORY,COMPASS.EC_CAR_ATTACHMENT,COMPASS.EC_CAR_PERSON,COMPASS.EC_CAR_PERSON_ROLES directory=DATA_PUMP_DIR dumpfile=expdpWEBP-2903202317.dmp logfile=impdx.log table_exists_action=truncate
Import: Release 18.0.0.0.0 - Production on Thu Mar 30 02:02:39 2023
Version 18.9.0.0.0
Copyright (c) 1982, 2018, Oracle and/or its affiliates. All rights reserved.
Password:
Connected to: Oracle Database 18c Standard Edition 2 Release 18.0.0.0.0 - Production
Master table "DAV4635"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "DAV4635"."SYS_IMPORT_TABLE_01": dav4635/******** tables=COMPASS.EC_CAR_AR_HISTORY,COMPASS.EC_CAR_ATTACHMENT,COMPASS.EC_CAR_PERSON,COMPASS.EC_CAR_PERSON_ROLES directory=DATA_PUMP_DIR dumpfile=expdpWEBP-2903202317.dmp logfile=impdx.log table_exists_action=truncate
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
Table "COMPASS"."EC_CAR_AR_HISTORY" exists and has been truncated. Data will be loaded but all dependent metadata will be skipped due to table_exists_action of truncate
Table "COMPASS"."EC_CAR_PERSON" exists and has been truncated. Data will be loaded but all dependent metadata will be skipped due to table_exists_action of truncate
Table "COMPASS"."EC_CAR_ATTACHMENT" exists and has been truncated. Data will be loaded but all dependent metadata will be skipped due to table_exists_action of truncate
Table "COMPASS"."EC_CAR_PERSON_ROLES" exists and has been truncated. Data will be loaded but all dependent metadata will be skipped due to table_exists_action of truncate
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
. . imported "COMPASS"."EC_CAR_AR_HISTORY" 10.12 MB 56971 rows
. . imported "COMPASS"."EC_CAR_PERSON_ROLES" 5.554 KB 4 rows
. . imported "COMPASS"."EC_CAR_ATTACHMENT" 1.063 MB 5657 rows
. . imported "COMPASS"."EC_CAR_PERSON" 9.679 KB 19 rows
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/INDEX
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/CONSTRAINT/CONSTRAINT
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type DATABASE_EXPORT/STATISTICS/MARKER
Job "DAV4635"."SYS_IMPORT_TABLE_01" successfully completed at Thu Mar 30 02:02:54 2023 elapsed 0 00:00:07
HKGDBAPPT01|epwebd:/dbapps/oracle/admin/epwebd/dpdump>
Hello Arvin,
Good question.
Firstly, for access to the system, we can talk with AE about whether they have spare licenses for you.
Secondly, a good database design will always have primary keys on all tables to protect data integrity, and the index that backs a primary key also helps performance.
This makes joins on primary and foreign key columns perform really well.
Thirdly, looking at HKDOAMSP you will see the query below has the most waits; on this instance it was 4473599451.
From here we know it is caused by HKGDOAMSP.dbo.fn_GetEmployeewithNickname, and the major wait is MEMORY/CPU, taking 26 minutes of the 1-hour span.
Investigate the query used in the function.
Quick Note: primary and foreign keys are designed for business reasons. They should not be created purely for performance gains.
Tuning Proper
Query 1:
SELECT @result = TRIM(first_name) + ' ' + TRIM(last_name)
       + CASE WHEN NickName IS NULL THEN '' ELSE ' (' + NickName + ')' END
FROM doams_SAPFeed
WHERE SAPID = @empid;
Upon investigation I saw that doams_SAPFeed has no PK, so I asked Lex which column should be unique in this table (hence the business-reason part).
I have experienced that sometimes ETL loads from the source are wrong, or the data was not properly transformed prior to loading; in this case I don't know which one.
This is where PKs, FKs, constraints, and triggers come into play during database architecture.
The database architecture is the foundation for the business process, followed by the application(s).
Moving on.
Take a baseline of the query in SSMS: get the timing, I/O statistics, and the execution plan.
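The baseline can be captured with something like this (a sketch: the variable declarations and column types are my assumptions, and "Include Actual Execution Plan" is toggled in the SSMS toolbar):

```sql
-- Capture I/O and timing statistics for the baseline run.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

DECLARE @empid VARCHAR(10) = '36859',   -- sample SAPID; the real type is an assumption
        @result VARCHAR(200);

SELECT @result = TRIM(first_name) + ' ' + TRIM(last_name)
       + CASE WHEN NickName IS NULL THEN '' ELSE ' (' + NickName + ')' END
FROM doams_SAPFeed
WHERE SAPID = @empid;

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
```

The statistics output appears on the SSMS Messages tab, as pasted below.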
(1 row affected)
Table 'doams_SAPFeed'. Scan count 1, logical reads 2683, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
(1 row affected)
SQL Server Execution Times: CPU time = 0 ms, elapsed time = 17 ms.
This tells us it is doing 2683 logical reads from memory (which is what we want; what we don't want is too many physical reads from disk).
We also see a table scan, but since the table is small that should not be a big deal.
Let’s see if we can improve it.
Next is to form a hypothesis on how to reduce the overhead of the query.
Hypothesis: add a PRIMARY KEY on SAPID in doams_SAPFeed.
In the HKDOAMSP environment we have to clean the data first, since SAPIDs '33905', '34006', and '34038' are duplicates.
So I created my own tables, dxfeed and dxfeed2, where dxfeed2 has a PK.
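The two test tables can be rebuilt with something like this (a sketch: the SAPID data type is a guess, and the duplicate rows mentioned above must be removed before the PK will go on):

```sql
-- dxfeed: heap copy of the feed; dxfeed2: same data plus a PK on SAPID.
SELECT * INTO dxfeed  FROM doams_SAPFeed;
SELECT * INTO dxfeed2 FROM doams_SAPFeed;

-- A PK column must be NOT NULL; VARCHAR(10) is a guess at the real type.
-- This assumes the duplicate SAPIDs have already been cleaned out of dxfeed2.
ALTER TABLE dxfeed2 ALTER COLUMN SAPID VARCHAR(10) NOT NULL;
ALTER TABLE dxfeed2 ADD CONSTRAINT PK_dxfeed2 PRIMARY KEY (SAPID);
```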
Test the hypothesis: run them side by side in SSMS.
SELECT * FROM dxfeed  WHERE SAPID = '36859';
SELECT * FROM dxfeed2 WHERE SAPID = '36859';
Results
Table 'dxfeed'. Scan count 1, logical reads 2161, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
(1 row affected)
SQL Server Execution Times: CPU time = 0 ms, elapsed time = 12 ms.
SQL Server parse and compile time: CPU time = 0 ms, elapsed time = 0 ms.
(1 row affected)
Table 'dxfeed2'. Scan count 0, logical reads 3, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
(1 row affected)
SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms.
SQL Server parse and compile time: CPU time = 0 ms, elapsed time = 0 ms.
Execution Plan
This shows that Query 2, which uses the PK, has near-zero cost compared to Query 1: a seek with 3 logical reads versus a scan with 2161.
Conclusion:
Adding a primary key on SAPID in doams_SAPFeed is more efficient: the lookup becomes a seek (3 logical reads, 0 ms) instead of a scan (2161 logical reads, 12 ms).
Recommendation:
This is a heavily used function; it ran:
5:00 PM-6:00 PM (7,798 execs)
6:00 PM-7:00 PM (87,515 execs)
8:00 PM-9:00 PM (92,800 execs)
Run this in PowerShell to timestamp each ping reply:
Ping.exe -t spiceworks.com | ForEach {"{0} - {1}" -f (Get-Date),$_}
-- Resumable online index rebuild
ALTER INDEX IX_ProdDate_v6 ON T_TDHEADER REBUILD
WITH (ONLINE = ON, RESUMABLE = ON, MAXDOP=8);
ALTER INDEX IX_ProdDate_v6 ON T_TDHEADER REBUILD
WITH (ONLINE = ON, RESUMABLE = ON);
-- List current resumable index operations with estimated time remaining
SELECT
name as index_name,
percent_complete,
state_desc,
start_time,
last_pause_time,
total_execution_time AS ExecutionMin,
-- elapsed minutes so far, scaled by the ratio of
-- percent remaining to percent completed
total_execution_time * (100.0-percent_complete)/percent_complete
AS ApproxExecutionMinLeft
FROM sys.index_resumable_operations;
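The ApproxExecutionMinLeft expression above is simple proportional extrapolation: minutes spent so far, scaled by how much work remains relative to how much is done. A quick sketch (with made-up numbers) to show the arithmetic:

```python
def approx_minutes_left(total_execution_time: float, percent_complete: float) -> float:
    """Estimate minutes remaining by proportional extrapolation.

    Mirrors the SQL expression:
        total_execution_time * (100.0 - percent_complete) / percent_complete
    Assumes progress is roughly linear; early in a rebuild the estimate is noisy.
    """
    return total_execution_time * (100.0 - percent_complete) / percent_complete

# e.g. 30 minutes in and 40% complete -> 45 minutes estimated remaining
print(approx_minutes_left(30, 40.0))
```

Note the estimate divides by percent_complete, so it is undefined at 0%; in the SQL version a rebuild that has not progressed yet would raise a divide-by-zero unless guarded with NULLIF.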
ALTER INDEX IX_ProdDate_v6 ON T_TDHEADER PAUSE;
ALTER INDEX IX_ProdDate_v6 ON T_TDHEADER RESUME;
ALTER INDEX IX_ProdDate_v6 ON T_TDHEADER ABORT;
-- List Partition by DAV
SELECT distinct f.NAME AS file_group_name,
SCHEMA_NAME(t.schema_id) AS table_schema,
t.name AS table_name,
p.partition_number,
ISNULL(CAST(left_prv.value AS VARCHAR(MAX))+ CASE WHEN pf.boundary_value_on_right = 0 THEN ' < '
ELSE ' <= '
END , '-INF < ')
+ 'X' + ISNULL(CASE WHEN pf.boundary_value_on_right = 0 THEN ' <= '
ELSE ' < '
END + CAST(right_prv.value AS NVARCHAR(MAX)), ' < INF') AS range_desc,
p.rows AS NumberOfRows,
pf.boundary_value_on_right,
ps.name AS partition_scheme_name,
pf.name AS partition_function_name,
left_prv.value AS left_boundary,
right_prv.value AS right_boundary
FROM sys.partitions p
JOIN sys.tables t
ON p.object_id = t.object_id
JOIN sys.indexes i
ON p.object_id = i.object_id
AND p.index_id = i.index_id
JOIN sys.allocation_units au
ON p.hobt_id = au.container_id
JOIN sys.filegroups f
ON au.data_space_id = f.data_space_id
LEFT JOIN sys.partition_schemes ps
ON ps.data_space_id = i.data_space_id
LEFT JOIN sys.partition_functions pf
ON ps.function_id = pf.function_id
LEFT JOIN sys.partition_range_values left_prv
ON left_prv.function_id = ps.function_id
AND left_prv.boundary_id + 1 = p.partition_number
LEFT JOIN sys.partition_range_values right_prv
ON right_prv.function_id = ps.function_id
AND right_prv.boundary_id = p.partition_number
where t.name in ('Partitioned_TShippment','STAGE_TShippment');
-- List user tables matching '%Shi%' (sysobjects is legacy; sys.tables is the modern equivalent)
SELECT
name,
crdate
FROM
SYSOBJECTS
WHERE
xtype = 'U' and name like '%Shi%';
GO
select * from CommandLog where command like 'ALTER INDEX%';