Apply SharePoint Online template using PnP PowerShell

Posted on

In this article we will define a template from an existing site and apply it to other SPO sites using the PnP library.
Please follow these steps:

    1. Install the latest version of the PnP PowerShell module from here: https://github.com/SharePoint/PnP-PowerShell/releases
    2. Go to your SPO tenant and create a SPO site. You can also define the list structure, theme, pages, and content types, because all of those will be part of the template.
    3. Make sure that the app catalog is activated on the tenant. (The template itself has no relation to the app catalog, but a bug found in the January PnP package caused a failure unless the app catalog was activated.)
    4. After having the site ready, run the below PowerShell command:
      $Creds = Get-Credential
      $SiteURL = "https://test.sharepoint.com/sites/demo1/"
      Connect-PnPOnline -Url $SiteURL -Credentials $Creds
      Get-PnPProvisioningTemplate -Out C:\temp\demo1Template.xml
    5. You should see a progress bar running for a while as all settings are exported.
    6. Open the XML file at the exported path (C:\temp\demo1Template.xml) and have a look. You can also make modifications to the file, such as adding items or fields, but be careful not to corrupt the schema.
    7. Now it is time to import it to the destination site. Create a destination site on SPO using the default team site template.
    8. Apply the new template by running the below command.
      $DestUrl = "https://test.sharepoint.com/sites/destination1/"
      Connect-PnPOnline -Url $DestUrl -Credentials $Creds
      $template = Read-PnPProvisioningTemplate -Path "C:\temp\demo1Template.xml"
      Apply-PnPProvisioningTemplate -InputInstance $template

After this, refresh the destination site, and you should see the template applied.
I suggest starting with a simple template first, then adding more complexity to it. This approach should also run smoothly with SP2016, but I haven't tried exporting a template from SPO and importing it to 2016, or vice versa.


“Access Denied” to Access Requests list or “Request approval failed” when you process a pending request in SharePoint Online

Posted on Updated on

I ran into a problem today during my work with one of my clients; this sometimes happens, mainly with sites that are migrated to SharePoint Online.




This works perfectly fine !

Draw PowerBI Map from IP Addresses

Posted on

I had a request from a client who has an Excel sheet with a list of IP addresses, as well as other information, and he wants to convert those IP addresses to the equivalent countries and list them in Power BI.

Some analytical tools like Splunk have this as a built-in function, but for Power BI you need to build it yourself. We will rely on a free web service to implement this, so let's start:

  1. Go to http://freegeoip.net/json/
  2. You should see the default data, which is your data retrieved by your IP address.
  3. Data displayed will be Country Code, Country Name, Region Code, Region Name, City, Zip Code(if applicable), Time Zone, Latitude, Longitude, and Metro Code
  4. If you enter a specific IP address, the corresponding data will be displayed, example: http://freegeoip.net/json/
  5. Now, let's get back to Power BI. Open Power BI Desktop, click Get Data, and then select Blank Query
  6. Click View, then Advanced Editor
  7. Copy and paste this query into the Advanced Editor text box:
    (#"IP Address" as text) => let
        Source = Json.Document(Web.Contents("http://freegeoip.net/json/" & #"IP Address")),
        #"Converted to Table" = Record.ToTable(Source),
        #"Transposed Table" = Table.Transpose(#"Converted to Table"),
        #"Promoted Headers" = Table.PromoteHeaders(#"Transposed Table")
    in
        #"Promoted Headers"
  8. Click Done, and rename the function with a meaningful name like : fn_GetRegion
  9. Now, you need to create a column with this function, to do so, from Edit Queries go to Add Column > Invoke Custom Function
  10. Enter the data as shown below
  11. The column will appear at the end of your table; now we need to expand it by clicking on the double-arrow icon as in the snapshot below
  12. Now, all the columns appear clearly,
  13. and you can use them to draw your maps.
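If you want to sanity-check what the fn_GetRegion function does outside Power BI, the same flattening can be sketched in Python. The sample response below is illustrative only (the field names follow the list in step 3; the values are made up), and a live lookup would fetch the service URL instead of a hard-coded string:

```python
import json

# Illustrative sample of the JSON the service returns; values are made up.
SAMPLE_RESPONSE = json.dumps({
    "ip": "8.8.8.8", "country_code": "US", "country_name": "United States",
    "region_code": "CA", "region_name": "California", "city": "Mountain View",
    "zip_code": "94035", "time_zone": "America/Los_Angeles",
    "latitude": 37.4, "longitude": -122.1, "metro_code": 807,
})

def parse_region(raw_json):
    """Flatten the service's JSON record into the columns the map needs."""
    data = json.loads(raw_json)
    return {
        "Country": data.get("country_name"),
        "City": data.get("city"),
        "Latitude": data.get("latitude"),
        "Longitude": data.get("longitude"),
    }

print(parse_region(SAMPLE_RESPONSE)["Country"])  # → United States
```

The M function in step 7 does the same thing declaratively: fetch the JSON, turn the record into a table, and promote the field names to column headers.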

Thanks to guavaq, who showed us the solution in this post

Let’s follow protocols: HTTP

Posted on Updated on

Some of us know a lot about what happens when we type a URL in our browser and hit Enter, or when an app connects to a web service or API to get data or perform an operation. But let us discuss exactly what goes on between the client (the browser or any other client) and the server resource (an HTML page, a file, a web service, etc.). As many know, HTTP is short for Hypertext Transfer Protocol, so everything that happens is written as text in this dialog between the client and the server. What exactly this message is, and what the server's reply looks like, is what we will try to clarify.

First let us discuss the client, which may be a web browser, an app, or even a web debugging tool like Fiddler, Postman, or SoapUI (tools I think we should talk about after HTTP).

Client Message or HTTP Request: in a happy world, the exchange between client and server is simply done by sending a request with a certain structure (we will discuss it in detail) to the server and waiting for the server's response. But this request can go through a long way, passing through a proxy, gateway, and/or tunnel. In short: a proxy is a forwarding agent, receiving requests for a URL in its absolute form, rewriting all or part of the message, and forwarding the reformatted request toward the server identified by the URL. A gateway is a receiving agent, acting as a layer above some other server(s) and, if necessary, translating the requests to the underlying server's protocol. A tunnel acts as a relay point between two connections without changing the messages; tunnels are used when the communication needs to pass through an intermediary (such as a firewall) even when the intermediary cannot understand the contents of the messages.



HTTP Request message structure (Header and body):

1- Request Line contains [Request Method] + [Server Resource URL] + [HTTP Protocol version] + [Carriage return] + [Line feed]

ex: GET /pdf/book.pdf HTTP/1.1

2- Request header fields list; all header fields are optional except the Host field. The field structure is [Field Name] + [Colon] + [Field Value] + [Carriage return] + [Line feed]

ex: Host: tecgang.wordpress.com

3-Empty Line  [Carriage return] + [Line feed] in order to split between request header and request body

4- [Optional] Request body, which can be anything: a binary file, text, JSON data, or any data you want to send to the server
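The four parts above can be assembled mechanically. Here is a minimal sketch in Python (the method, path, and host values are just placeholders for the examples in this post):

```python
def build_request(method, path, host, headers=None, body=""):
    """Assemble a raw HTTP/1.1 request: request line, header fields,
    an empty line, then the optional body."""
    lines = [method + " " + path + " HTTP/1.1",  # 1- request line
             "Host: " + host]                    # 2- Host is the one mandatory header
    for name, value in (headers or {}).items():
        lines.append(name + ": " + value)
    # 3- the empty line separating header from body is just another CRLF
    return "\r\n".join(lines) + "\r\n\r\n" + body  # 4- optional body

print(build_request("GET", "/pdf/book.pdf", "tecgang.wordpress.com"))
```

Every field ends with a carriage return and line feed (`\r\n`), exactly as described above.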

HTTP Request methods are:

GET: The GET method requests a representation of the specified resource. for example to get an image or html file.

HEAD: The HEAD method asks for a response identical to that of a GET request, but without the response body. For example, if you want to download a file but want to know its length first, you request HEAD. (Useful for retrieving meta-information.)

POST: Used to ask the server to accept the entity enclosed in the request as a new entity of the resource identified by the URL.

PUT: Used to ask the server to store the entity enclosed in the request under the resource identified by the URL. If it does not exist, a new one is created; otherwise the existing one is updated.

 DELETE: The DELETE method deletes the specified resource.

TRACE: The TRACE method echoes the received request so that a client can see what (if any) changes or additions have been made by intermediate servers.

OPTIONS: The OPTIONS method returns the HTTP methods that the server supports for the specified URL.

CONNECT: The client asks an HTTP Proxy server to tunnel the TCP connection to the desired destination. The server then proceeds to make the connection on behalf of the client. Once the connection has been established by the server, the Proxy server continues to proxy the TCP stream to and from the client.

PATCH: The PATCH method applies partial modifications to a resource.



HTTP Response message structure (Header and body):

1- Status line contains [HTTP Version] + [Status Code] + [Message] + [Carriage return] + [Line feed]

ex: HTTP/1.1 200 OK

2- Response header fields, all header fields are optional and the field structure is [Field Name] + [Colon] + [Field Value] + [Carriage return] + [Line feed]

ex: Content-Type: application/pdf

3- Empty Line  [Carriage return] + [Line feed] in order to split between response header and response body

4- [Optional] Response body, which can be anything: a binary file, text, JSON data, or any data sent by the server



Now you can see what happens if you type a URL like https://tecgang.wordpress.com/index.php in your browser; the dialog will be:

The Request:

GET /index.php HTTP/1.1
Host: tecgang.wordpress.com

The Response:

HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8
Content-Length: 138

<title>An Example Page</title>
Hello World, this is a very simple HTML document.
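Reading the dialog in the other direction, a response like the one above can be split back into its parts with a few lines of Python. This is a sketch of the structure only; real clients also handle things like folded headers and chunked encoding:

```python
def parse_response(raw):
    """Split a raw HTTP response into status line parts, headers, and body."""
    head, _, body = raw.partition("\r\n\r\n")      # empty line splits header and body
    status_line, *header_lines = head.split("\r\n")
    version, code, reason = status_line.split(" ", 2)
    headers = {}
    for line in header_lines:                      # [Field Name]: [Field Value]
        name, _, value = line.partition(":")
        headers[name.strip()] = value.strip()
    return version, int(code), reason, headers, body

raw = ("HTTP/1.1 200 OK\r\n"
       "Content-Type: text/html; charset=UTF-8\r\n"
       "\r\n"
       "<title>An Example Page</title>")
version, code, reason, headers, body = parse_response(raw)
print(code, reason, headers["Content-Type"])  # → 200 OK text/html; charset=UTF-8
```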

Adding application insights to an existing Azure Application

Posted on


  1. Create a new application insights instance and associate to an existing app service
    1. Select New resource
    2. Look for “application insights”
    3. Give your new instance a name
      Give it the same name as the app service you are configuring, for ease of use

    4. Wait for the instance to be deployed successfully and browse to it
    5. Copy the instrumentation key (or iKey)
    6. Now browse to the related app service and open the application settings
    7. Add an application setting called ‘iKey’ and paste the instrumentation key copied previously
    8. Save and restart the service (saving app settings restarts the app service anyway). Allow a few minutes for data to be collected.
  2. Smoke tests
    1. Logs: It is also possible to check the logs for confirmation that the new settings were taken into account.
      1. In the Logging Framework logs: Ordering by descending EventDateTime, you should be able to find a couple of entries showing a restart of the application and that setting being applied
      2. Message should show:

        Note: The same should be added to the diagnostic logs should they be enabled
    2. Editor: Confirm the default value does not exist anymore
      1. Browse to the App Service Editor in your app service settings and click Go

      2. Select ApplicationInsights.config and scroll to the bottom. The default value for the instrumentation key is commented out.

  3. Other
    1. It is possible that if you created the application service with an associated application insights instance, it would give you the option to enable application insights.

      But this document assumes that application insights is added after the creation of the related application service

Export SharePoint Online Term Stores using PowerShell

Posted on Updated on

How do you export SharePoint Online term stores to a CSV file, so that you can import them into any other environment?

Unfortunately, while Microsoft provides a simple out-of-the-box way to import term stores using a CSV template, it didn't provide a way to export them.

Below you will find a very simple PowerShell script that will allow you to easily export your term stores, so that you can clone them in any other tenant without much effort:

Set-ExecutionPolicy -Scope CurrentUser Unrestricted
#Adding references to SharePoint client assemblies (adjust the paths to your
#SharePoint Online Client Components SDK installation)
Add-Type -Path "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\16\ISAPI\Microsoft.SharePoint.Client.dll"
Add-Type -Path "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\16\ISAPI\Microsoft.SharePoint.Client.Runtime.dll"
Add-Type -Path "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\16\ISAPI\Microsoft.SharePoint.Client.Taxonomy.dll"
#Specify the group, term set, admin user and SharePoint site URL
$GroupName = "Add Here the Group Name"
$TermSetName = "Add Here the TermSet Name you want to export"
$siteUrl = "https://tenant.sharepoint.com"
$UserName = "Put Your UserName Here"
$Pwd = Read-Host -Prompt "Enter your password" -AsSecureString
#Recursive function to get terms
function GetTerms([Microsoft.SharePoint.Client.Taxonomy.Term] $Term, [String]$ParentTerm, [int] $Level)
{
    $Terms = $Term.Terms
    $Context.Load($Terms)
    $Context.ExecuteQuery()
    $ParentTerm = $Term.Name
    $Level = $Level + 1
    Foreach ($SubTerm in $Terms)
    {
        #up to 7 term levels are written
        $NumofCommas = 7 - $Level
        $commas = ""
        For ($i = 0; $i -lt $NumofCommas; $i++)
        {
            $commas = $commas + ","
        }
        $file.Writeline("," + "," + "," + "," + $Term.Description + "," + $ParentTerm + "," + $SubTerm.Name + $commas)
        GetTerms -Term $SubTerm -ParentTerm $ParentTerm -Level $Level
    }
}
$mycreds = New-Object System.Management.Automation.PSCredential ($UserName, $Pwd)
$Context = New-Object Microsoft.SharePoint.Client.ClientContext($siteUrl)
$credentials = New-Object Microsoft.SharePoint.Client.SharePointOnlineCredentials($mycreds.UserName, $mycreds.Password)
$Context.Credentials = $credentials
$MMS = [Microsoft.SharePoint.Client.Taxonomy.TaxonomySession]::GetTaxonomySession($Context)
#Get Term Stores
$TermStores = $MMS.TermStores
$Context.Load($TermStores)
$Context.ExecuteQuery()
$TermStore = $TermStores[0]
#Get Group
$Group = $TermStore.Groups.GetByName($GroupName)
#Bind to Term Set
$TermSet = $Group.TermSets.GetByName($TermSetName)
$Terms = $TermSet.Terms
$Context.Load($TermStore)
$Context.Load($TermSet)
$Context.Load($Terms)
$Context.ExecuteQuery()
#Create the file and add headings
$OutputFile = "D:\Out.txt"
$file = New-Object System.IO.StreamWriter($OutputFile)
$file.Writeline("Term Set Name,Term Set Description,LCID,Available for Tagging,Term Description,Level 1 Term,Level 2 Term,Level 3 Term,Level 4 Term,Level 5 Term,Level 6 Term,Level 7 Term")
$lineNum = 1
Foreach ($Term in $Terms)
{
    if ($lineNum -eq 1)
    {
        #output term set properties on the first line only
        $file.Writeline($TermSet.Name + "," + $TermSet.Description + "," + $TermStore.DefaultLanguage + "," + $TermSet.IsAvailableForTagging + "," + $Term.Description + "," + $Term.Name + "," + "," + "," + "," + "," + ",")
    }
    else
    {
        $file.Writeline("," + "," + "," + "," + $Term.Description + "," + $Term.Name + "," + "," + "," + "," + "," + ",")
    }
    $lineNum = $lineNum + 1
    $TermTreeLevel = 1
    GetTerms -Term $Term -Level $TermTreeLevel -ParentTerm ""
}
$file.Close()
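The heart of the script is the recursion that turns a term tree into one CSV row per term, padded out to seven level columns. That logic, independent of SharePoint, can be sketched in Python; the nested tree below is a made-up example, not real term store data:

```python
# Each term is a (name, [children]) pair; this tree is illustrative only.
TREE = ("Geography", [("Europe", [("France", []), ("Spain", [])]), ("Asia", [])])

MAX_LEVELS = 7  # the CSV layout reserves seven "Level N Term" columns

def rows_for(term, level=1):
    """Emit one CSV row per term: the name goes in its level's column,
    every other level column stays empty."""
    name, children = term
    cells = [""] * MAX_LEVELS
    cells[level - 1] = name
    yield ",".join(cells)
    for child in children:
        yield from rows_for(child, level + 1)  # children shift one column right

for row in rows_for(TREE):
    print(row)
```

Each row carries exactly seven comma-separated level cells, which is why the PowerShell script counts how many trailing commas to append at each depth.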

Get All Databases in your SQL Server

Posted on Updated on

This SQL script is used to get all databases in your SQL Server with details as follow (Thanks to Tibor Karaszi):

  1. Database name
  2. Data allocated
  3. Data used
  4. Log allocated
  5. Log used
  6. Recovery model
  7. Instance name

To get this information, please do the following:

1. Run the script below in a new SQL query window
USE [master]
GO
/****** Object: StoredProcedure [dbo].[sp_dbinfo] Script Date: 06/21/2017 16:43:14 ******/
CREATE PROC [dbo].[sp_dbinfo]
 @sort char(1) = 'n'
,@include_instance_name char(1) = 'n'
AS
/*
@sort accepts 4 values: 'n' (default), 'd', 'l' and 'r'.
It specifies the sort order (name, data allocated, log allocated, rollup only).
@include_instance_name accepts 2 values, 'y' and 'n'.
Written by Tibor Karaszi 2009-12-29
Modified 2010-01-19, fixed data type for db name. Thanks csm!
Modified 2010-05-24, added support for offline databases. Thanks Per-Ivan N?und.
Modified 2011-07-21, SQL Server 11, use sysperfinfo instead of DBCC SQLPERF.
Modified 2011-09-23, master instead of MASTER, also qualified sysperfinfo.
Modified 2011-12-28, renamed to sp_dbinfo, added rollup option.
Modified 2013-02-19, added recovery model and option for instance name.
*/
SET NOCOUNT ON
DECLARE
 @sql nvarchar(2000)
,@db_name sysname
,@recovery_model varchar(12)
,@crlf char(2)
SET @crlf = CHAR(13) + CHAR(10)

--Create tables to hold space usage stats from commands
CREATE TABLE #logspace(
 database_name sysname NOT NULL
,log_size real NOT NULL
,log_percentage_used real NOT NULL
)

CREATE TABLE #dbcc_showfilestats(
 database_name sysname NULL
,file_id_ int NOT NULL
,file_group int NOT NULL
,total_extents bigint NOT NULL
,used_extents bigint NOT NULL
,name_ sysname NOT NULL
,file_name_ nvarchar(3000) NOT NULL
)

--Create table to hold final output
CREATE TABLE #final_output(
 database_name sysname
,data_allocated int
,data_used int
,log_allocated int
,log_used int
,is_sum bit
)

--Populate log space usage
INSERT INTO #logspace(database_name, log_size, log_percentage_used)
SELECT
 instance_name AS 'Database Name'
,SUM(CASE WHEN counter_name = 'Log File(s) Size (KB)' THEN cntr_value / 1024.
 END) AS 'Log Size (MB)'
,SUM(CASE WHEN counter_name = 'Percent Log Used' THEN cntr_value
 END) AS 'Log Space Used (%)'
FROM master..sysperfinfo
WHERE counter_name IN('Log File(s) Size (KB)', 'Percent Log Used')
AND instance_name != '_total'
GROUP BY instance_name

--Populate data space usage
DECLARE db CURSOR FOR SELECT name FROM sys.databases WHERE state_desc = 'ONLINE'
OPEN db
WHILE 1 = 1
BEGIN
 FETCH NEXT FROM db INTO @db_name
 IF @@FETCH_STATUS <> 0
   BREAK
 SET @sql = 'USE ' + QUOTENAME(@db_name) + ' DBCC SHOWFILESTATS'
 INSERT INTO #dbcc_showfilestats(file_id_, file_group, total_extents, used_extents, name_, file_name_)
 EXEC (@sql)
 UPDATE #dbcc_showfilestats SET database_name = @db_name WHERE database_name IS NULL
END
CLOSE db
DEALLOCATE db

--Result into final table
INSERT INTO #final_output(database_name, data_allocated, data_used, log_allocated, log_used, is_sum)
SELECT
 CASE WHEN d.database_name IS NOT NULL THEN d.database_name ELSE '[ALL]' END AS database_name
,ROUND(SUM(CAST((d.data_alloc * 64.00) / 1024 AS DECIMAL(18,2))), 0) AS data_allocated
,ROUND(SUM(CAST((d.data_used * 64.00) / 1024 AS DECIMAL(18,2))), 0) AS data_used
,ROUND(SUM(CAST(log_size AS numeric(18,2))), 0) AS log_allocated
,ROUND(SUM(CAST(log_percentage_used * 0.01 * log_size AS numeric(18,2))), 0) AS log_used
,GROUPING(d.database_name) AS is_sum
FROM
(
 SELECT database_name, SUM(total_extents) AS data_alloc, SUM(used_extents) AS data_used
 FROM #dbcc_showfilestats
 GROUP BY database_name
) AS d
INNER JOIN #logspace AS l ON d.database_name = l.database_name
INNER JOIN sys.databases AS sd ON d.database_name = sd.name
GROUP BY d.database_name WITH ROLLUP

--Output result
SET @sql = '
SELECT f.database_name, f.data_allocated, f.data_used, f.log_allocated, f.log_used, d.recovery_model_desc' +
CASE @include_instance_name WHEN 'y' THEN ', @@SERVERNAME AS instance_name' ELSE '' END + @crlf +
'FROM #final_output AS f LEFT OUTER JOIN sys.databases AS d ON f.database_name = d.name' + @crlf +
CASE WHEN @sort = 'r' THEN 'WHERE f.database_name = ''[ALL]''' ELSE '' END + @crlf +
'ORDER BY is_sum' +
CASE
 WHEN @sort = 'n' THEN ', database_name'
 WHEN @sort = 'd' THEN ', data_allocated DESC'
 WHEN @sort = 'l' THEN ', log_allocated DESC'
 ELSE ''
END
--PRINT @sql
EXEC (@sql)
GO

2. This will create a procedure with the name dbo.sp_dbinfo.
To confirm, check Databases > System Databases > master > Programmability > Stored Procedures > dbo.sp_dbinfo

3. Now run this command
USE [master]
EXEC sp_dbinfo 'd', 'y'

you should see a result like the one below

you can also pick one of these commands for different output
--Test execution
EXEC sp_dbinfo
EXEC sp_dbinfo 'n'
EXEC sp_dbinfo 'd'
EXEC sp_dbinfo 'l'
EXEC sp_dbinfo 'r'
EXEC sp_dbinfo 'n', 'y'
EXEC sp_dbinfo 'd', 'y'

Good Luck !