My Blog Posts
7 Best Practices to Design Cutting-Edge GraphQL Schemas
Aug 02, 2022 · 4 min read · GraphQL
GraphQL is a relatively new technology brought to the world by Meta (formerly Facebook). Despite being new, it is getting more and more popular.
Almost all the big companies, as well as many small organizations and startups, use GraphQL. That’s why it is essential knowledge for every web developer today.
While working with GraphQL schemas, we must follow some best practices for designing cutting-edge schemas. Otherwise, as the project grows, the design will not stay clean, and the project will become a mess.
Today, we will discuss some best practices for designing GraphQL schemas. So, without further ado, let’s jump into the main points.
1. Use deprecation messages properly.
While working on a team, there are many scenarios when the requirements change. Sometimes, we need to change the field name or remove some fields.
For example, suppose you are building a blogging website, and you are designing the schema for fetching a blog. The query initially looks like the following:
blog(id: $id) {
title
subtitle
imageUrl
}
Now the requirement has changed, and you are told to include alt text for the images, so you redesign the query like the one below:
blog(id: $id) {
title
subtitle
image {
src
alt
}
}
You have now changed your API and published it to production.
But, Wait!
As soon as you publish the new changes to production, your app will break because your frontend team has not implemented the change yet! Even if you are building a public or partner API, pushing such changes directly is a BIG NO!
On the other hand, simply keeping the old field doesn’t solve the problem either, because it is now redundant. Your frontend team or end users might not know whether this field will be removed, and you don’t know how long you will keep it.
Keeping the field in this example might not be a big issue, but in a larger schema there may be many fields that should be removed at some point.
So, what’s the solution?
Instead of obliterating the field, deprecate it with a reason and the date when it will be removed. This way, the frontend developers get a chance to migrate, and the field can be dropped safely later.
type Blog {
  title: String
  subtitle: String
  imageUrl: String @deprecated(reason: "This field is deprecated and will be removed on Aug 6, 2022. Use the image field instead.")
  image: Image
}

type Image {
  src: String
  alt: String
}
2. Name your fields explicitly.
You should name your schema fields explicitly. A field’s name should be concise and accurately describe the value it returns. At the same time, the name should not over-explain its purpose or type.
Here are some examples of field naming in a GraphQL schema:
BAD
imageUrlString
descriptionText
GOOD
imageUrl
likesCount
In this example, if the field returns only an image URL, you should name it imageUrl. On the other hand, if it returns an image object with width, height, src, and alt text, you should call the field image.
The same applies to likesCount. likesCount means we are returning the number of likes, whereas if we picked likes as the name, it would suggest the list of users who liked the post.
So, name your fields explicitly.
3. Use aliases.
Using aliases can solve a lot of redundancy problems. Let’s check the following example for a better understanding.
Let’s say we need to return two different representations of the description in our schema: one is the plain description, and the other is descriptionHtml. We can expose both from the same field using aliases.
{
description
descriptionHtml: description(format: "HTML")
}
This is so powerful that we can use this in many use cases like below:
{
descriptionEn: description(lang: "en")
descriptionEnHTML: description(format: "HTML", lang: "en")
descriptionFr: description(lang: "fr")
}
4. Make fields nullable.
This is another essential but easy practice to follow in every GraphQL project. It’s easy because GraphQL makes fields nullable by default.
Making fields nullable is helpful because, whenever the project requirements change, we might deprecate a field and stop assigning values to it.
If the field were non-nullable, this would throw an error. So, it is a good idea to keep fields nullable unless you have a strong reason to make them non-nullable.
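For illustration, here is a small sketch: id is a sensible non-null field (marked with !) because it always exists, while the rest stay nullable, which is the default, so they can be deprecated or left unresolved later:
```graphql
type Blog {
  id: ID!           # non-null: a blog always has an id
  title: String     # nullable (the default): safe to deprecate or leave empty later
  subtitle: String
}
```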
5. Use the Relay Cursor Connections specification for pagination.
I have written articles explaining pagination in GraphQL and how to implement the Relay Cursor Connections specification. You can read those two articles if you want.
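As a quick reminder of what that specification looks like, here is a minimal sketch of a Relay-style connection for the Blog type from earlier (the field names follow the Relay Cursor Connections specification; the exact arguments are up to you):
```graphql
type BlogConnection {
  edges: [BlogEdge]
  pageInfo: PageInfo!
}

type BlogEdge {
  cursor: String!
  node: Blog
}

type PageInfo {
  hasNextPage: Boolean!
  hasPreviousPage: Boolean!
}

type Query {
  blogs(first: Int, after: String): BlogConnection
}
```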
6. Create a user entry point for the authenticated user.
Don’t allow client applications to provide the user identity. Use the authentication token to resolve the user’s identity instead. Also, create a user entry point (like me below) to get the authenticated user’s information.
BAD
user(email: $email) {
username
email
name
lastLoggedIn
}
GOOD
me {
username
email
name
lastLoggedIn
}
Also, remember: don’t ever save your authentication token in cookies or local storage. I wrote an article describing where to store authentication tokens instead. You can read it if you want.
7. Turn introspection off in production.
It is one of the most crucial points. Turn off introspection on your GraphQL server in production. You don’t want to expose your API design to the public unless you are building the API for public use.
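For example, assuming you use Apollo Server (most GraphQL servers expose a similar flag), a minimal sketch looks like this:
```js
const { ApolloServer, gql } = require('apollo-server')

const typeDefs = gql`
  type Query {
    hello: String
  }
`
const resolvers = { Query: { hello: () => 'world' } }

const server = new ApolloServer({
  typeDefs,
  resolvers,
  // Allow introspection only outside of production
  introspection: process.env.NODE_ENV !== 'production',
})

server.listen().then(({ url }) => console.log(`Server ready at ${url}`))
```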
That’s it. I hope this article will be helpful to you. Have a great day!
How to Integrate Google Analytics into Your Next.js Web App
Jul 29, 2022 · 2 min read · Next JS
Today in this article, we will see how we can add Google Analytics to our Next.js app. So without further ado, let’s jump right into it.
Step 1: Creating a new Next.js app
First, let’s create a new Next.js app by running the command in the terminal:
```sh
npx create-next-app demo-app
```
This step is not required if you already have an existing Next.js app and want to add Google Analytics.
Step 2: Create a new property in Google Analytics
Now, let’s go to the Google Analytics website. Create your Google Analytics account if you haven’t already.
After creating the Google Analytics account, click on the “Admin” section in the bottom left corner and click “Create Property.” Give the property a name and fill out all the other details the form asks for.
Step 3: Create DataStream in Google Analytics
Next, we need to set up our data stream for our Next.js app. Here, we can set up data streams for three types of platforms. As we add analytics to our Next.js app, we will choose “Web.”
Then we should give our website URL and a stream name. When you have done that, click on “Create Stream.”
Step 4: Add Google Analytics script to your Next.js app
After creating the stream, you get the web stream details page. Expand the “Global site tag (gtag.js),” and you should see some script code.
Now, add this script to the pages/_app.tsx file of your Next.js app.
We used Next.js’s Script component to render the script tag and set strategy="lazyOnload" on it. We also put the Google Analytics code in an environment variable (in the .env file) named NEXT_PUBLIC_GOOGLE_ANALYTICS_CODE. The rest is similar to what we copied from the Google Analytics website.
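Since the copied snippet itself isn’t reproduced here, below is a minimal sketch of what pages/_app.tsx can look like; the inline gtag code should mirror whatever Google Analytics gave you, and the environment variable name is the one mentioned above:
```tsx
// pages/_app.tsx
import type { AppProps } from 'next/app'
import Script from 'next/script'

// Measurement ID read from the .env file
const GA_CODE = process.env.NEXT_PUBLIC_GOOGLE_ANALYTICS_CODE

export default function MyApp({ Component, pageProps }: AppProps) {
  return (
    <>
      {/* Loads gtag.js lazily so it doesn't block the page */}
      <Script
        src={`https://www.googletagmanager.com/gtag/js?id=${GA_CODE}`}
        strategy="lazyOnload"
      />
      <Script id="google-analytics" strategy="lazyOnload">
        {`
          window.dataLayer = window.dataLayer || [];
          function gtag(){dataLayer.push(arguments);}
          gtag('js', new Date());
          gtag('config', '${GA_CODE}');
        `}
      </Script>
      <Component {...pageProps} />
    </>
  )
}
```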
Step 5: Test if Google Analytics is correctly set up on your web app
We have completed setting up Google Analytics in our Next.js web application. Now we need to test whether Google Analytics is correctly set up.
Let’s open our web application and bring up the browser console. In the console, type dataLayer and hit enter. If Google Analytics is correctly set up on our web app, we should see something similar to the following.
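The exact output varies by browser and gtag version, but the check looks roughly like this:
```js
// Typed into the browser console on the running site:
dataLayer
// If Google Analytics is wired up, this prints an array of queued gtag calls, e.g.:
// [Arguments(2), Arguments(2), {…}]
// If it prints "dataLayer is not defined", the script has not been loaded.
```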
Conclusion
We have successfully integrated Google Analytics into our Next.js web application. We can now analyze our web visitors from the Google Analytics charts.
Beautify Your GitHub Profile Like a Pro
May 11, 2022 · 3 min read · GitHub
Making a great GitHub portfolio is very helpful for any developer. It helps create a great impression, and it is one of the best ways to make your skills stand out.
This tutorial will turn your boring GitHub profile into a super professional-looking portfolio. So, without further ado, let’s get right into it.
Step 1: Create a New Public Repository
The first step is to create a new public Repository with the same name as your GitHub username. In my case, my GitHub username is ludehsar, so I am starting a new public repository named ludehsar.
Ensure the project is Public and “Add a README file” is checked.
After that, click on the “Create repository” button.
Step 2: Edit the README.md file
Now, after creating a new repository, click on the edit icon in your README.md file, and paste the following code:
Hi there 👋
I am Md Rashedul Alam Anik, currently working as a Software Engineer at the . I am a Full-Stack JavaScript developer and love writing clean and maintainable code. Find out more about me & feel free to connect with me here:
[LinkedIn](https://www.linkedin.com/in/ludehsar/)
[Medium](https://rashedulalam.medium.com/)
[Email](mailto:mdraanik12@gmail.com)
[Facebook](https://www.facebook.com/rashedul.alam.anik.2/)
⚡ Technologies
*(Technology badge images and the GitHub stats images go here; each one is a markdown image.)*
Here, this profile README is divided into four parts. The first section is an overview, the second section is social icon badges, the third section is Technologies badges, and the last one is statistics.
Let’s first change the overview according to our personal preferences.
After changing the bio, let’s change the links and labels of the social icon badges. You can add all the social icons from .
Here is a sample social link badge:
[![ludehsar](badge-image-url)](https://www.linkedin.com/in/ludehsar/)
Here, we should change the links according to our social URLs. Also, to change the label, you can rename the ludehsar label to your preferred name here.
After that, you can customize the skills badges according to your skills. You can find those badges in .
The final part is the statistics. You can generate your stats by changing the username to your username.
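For example, assuming you use the popular github-readme-stats project for the stats cards, the markdown looks roughly like this (replace ludehsar with your own username):
```md
![GitHub stats](https://github-readme-stats.vercel.app/api?username=ludehsar&show_icons=true)
![Top languages](https://github-readme-stats.vercel.app/api/top-langs/?username=ludehsar&layout=compact)
```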
After all those changes, your profile should look something like this:
Step 3: Manage Your Pinned Repositories
You can add your pinned repository by clicking on the “Customize your pins” link. You can add up to six pinned repositories on your profile homepage.
You can also sort the pinned repositories by using the dragging icon.
Step 4: Organize your repositories
There are various steps when it comes to organizing your repositories.
First of all, your repositories need to have a nice short description in the about section. To add a description in your about section, click on the cog button beside the About section like below:
It’s also nice to include the website URL and the topics associated with this repository.
You can also control which components are shown in your repository by checking or unchecking the options at the bottom of the About settings.
Next, you can add a nice README to your project. To add a beautiful readme that catches the eye, you can follow my below tutorial:
You can also add other markdown documentation, such as CONTRIBUTING.md, CODE_OF_CONDUCT.md, SECURITY.md, etc. You can generate all of these markdown files from the repository settings. They are nice to have because they tell other developers how they can contribute to your project.
Step 5: Contribute to Open Source Project
The final tip to make your GitHub profile look good is contributing to open-source projects regularly. It will keep the green squares growing in your contribution heat map.
Even if you cannot contribute to open-source projects, you can work on your own projects. This will also keep your contribution heat map green.
Conclusion
That’s pretty much it. If you follow these simple and easy steps, your GitHub profile will look super professional.
Making your GitHub profile look professional will help you create a better impression as a developer. It will boost your chances of getting programming jobs.
Moreover, it will strengthen your professional profile and can even help you reach more clients as a software developer.
How to Write a Stunning Readme for Your Projects
Nov 26, 2021 · 4 min read · GitHub
A readme is a detailed overview of your project: what the project is, how to install or configure it, how to use it, who the authors are, and all the other related details.
It is the first impression of your project. Whenever someone opens your repository, they will first check the readme to get an idea of what the project is all about. A detailed readme always creates a good impression of the project.
Now, in this article, we will see how to create an awesome and detailed readme for all your projects. So, without further ado, let’s get right into it:
Why do you need to write a readme?
Readmes help people understand what your project is about.
A readme is a must if you are developing an open-source project. Even if you are not doing open source, you should write a readme so that your coworkers on the project can understand it well.
Also, writing great readmes can help you land a better job. So, it is always useful to write a better readme for each of your projects.
TL;DR
Readmes are usually written in Markdown, so a basic knowledge of Markdown is needed to write one.
In this article, we will also learn about some basic Markdown syntaxes that will be needed to write a better readme or any markdown documents:
1. First of all, Markdown also supports raw HTML. So, if you know a little HTML, you can use HTML syntax too.
2. You can write headings using the `#` notation. The number of hash signs represents the heading level, and you can use up to six levels. For example,
`# This is Heading level 1`
`## This is Heading level 2`
`### This is Heading level 3`
3. Adding a blank line between two blocks of text separates them into paragraphs. For example,
This is the first paragraph.
This is the second paragraph.
4. You can bold text by wrapping it with two asterisks (`**`). For example, `**some text**` will result in **some text**.
5. To italicize text, wrap it with a single asterisk (`*`). For example, `*some text*` will result in *some text*.
6. The following example demonstrates how to add an ordered list:
1. This is ordered list element 1
2. This is ordered list element 2
3. This is ordered list element 3
You can also create an unordered list like below:
- This is unordered list element 1
- This is unordered list element 2
- This is unordered list element 3
7. You can add images in the readme by using the following code: `![alt text](path/to/image.png)`
8. You can add links using the following code: `[link text](https://example.com)`
9. You can show code samples in the readme using backticks. Wrap inline code in single backticks, like `` `this is an inline code` ``, and wrap a multi-line code block in three backticks on the lines before and after it.
That’s pretty much all you need to know to write readmes, contribution guides, or any other markdown documents for your project.
Now, as we know some basic markdown syntaxes, let’s deep dive into how we can create a beautiful readme for your projects.
Step-by-step process on how to write a better readme
The good thing is, we do not need to create a readme file from scratch. There are so many readme templates out there on GitHub that will help you create your readme easily.
is a great resource for finding the best readme designs specific to the projects.
I generally use the in almost all my projects, because it has many functionalities that are useful in demonstrating the projects. So, in this article, let’s create a readme using this template.
Step 1: Creating a GitHub repository
Let’s first create a GitHub repository by clicking on the plus button in the top right corner. Give the project a title and description, and check “Add a README file”. After that, click on the “Create Repository” button.
Step 2: Copying the Readme content in your repository’s readme
Now, go to the repository, and click on the README.md file. Then click on the “Raw” button.
After clicking on the raw button, copy all the text displayed in the browser. Then paste it into your project’s readme file.
To paste in your readme, click on the “Edit” icon button in your README.md file:
Step 3: Changing the README file according to the project details
Now, it’s time to change the README.md file according to the project description. Let’s start by changing the project title and overview of the project.
In the same way, change the remaining sections according to your project, adding or removing sections as needed.
You should also focus on the links and images, and change them accordingly.
To preview the changes, just click on the preview button.
Another thing to notice is that you can add customized shields that will represent the states of your repositories or link to your LinkedIn profile.
In your readme templates, you can find some shields like below:
Now, if you scroll to the bottom, you can customize those shields according to your project:
As shown above, change your GitHub username and repository name in all the links accordingly. Then you will see the magic in the preview.
If you want to use other shields in your projects or just want to explore them, feel free to visit them at .
Step 4: Saving and committing those changes
After you make changes to your readme file, scroll down to the “Commit changes” section, write a commit message and click on the “Commit changes” button.
Woo-hoo! You have now created a stunning-looking readme in your GitHub repository.
Final Thoughts
In this article, we have seen how we can create a beautiful, detailed, and organized readme in our repositories.
Well-organized readmes create the best impression of the developer. They show that the project is organized, documented, and well maintained.
Anyway, thanks for reading my article. Have a nice day!
Easily Integrate ClamAV Antivirus Software in Your Node.js Application
Oct 10, 2021 · 9 min read · Node.js
Security is an important keyword when it comes to the internet and web applications. We always try hard to keep our system safe and secure from cyberattacks. Compromising security can cause us to expose our valuable information and thus can ruin our business.
Many security attacks are carried out by uploading malicious files to a website. Many websites allow users to upload files such as profile photos and resumes, and hackers may use this opportunity to upload malicious files and gain unauthorized access.
So, if our web application allows users to upload files, then we need to provide some level of security to scan the vulnerabilities in the files. We should install antivirus software on our server and scan the uploaded files.
ClamAV is a great option for scanning files in web applications. It is versatile, supporting multiple file formats and signature languages. There is also a great Node.js package called clamscan, which provides an API for scanning files from the backend application.
In this article, we will explore how we can use clamscan to scan files and detect viruses and malicious content on the server. So, without further ado, let’s get right into it.
Installing and Configuring ClamAV in Ubuntu
At first, we need to install ClamAV on our server or local computer. Installing ClamAV on a Linux machine is pretty easy:
For installing on Fedora-based distros:
sudo yum install clamav
For installing on Debian-based distros:
sudo apt-get install clamav clamav-daemon
For macOS (with Homebrew):
brew install clamav
After installing ClamAV on our Linux machine, we need to configure some settings. For computers running Ubuntu, execute the following commands in the terminal to restart the clamav-freshclam service and check its status:
sudo service clamav-freshclam restart
sudo service clamav-freshclam status
After that, we should start the clamav-daemon service. This step is very important because it creates a socket file at /var/run/clamav/clamd.ctl, which will be used to scan the file streams uploaded to the server.
Let’s run the following commands in the terminal to start the clamav-daemon service and check its status:
sudo service clamav-daemon start
sudo service clamav-daemon status
After installing and configuring ClamAV in our Linux machine, we are ready to proceed further to implement scanning files in our Node.js application.
Setting Up the Project
At first, let’s create a new project and set up some basic configurations.
To create a new Node.js application from scratch, we need to type the following command in the terminal:
npm init -y
It will create a package.json file with some information. Then we should install Express by running the following command in the terminal:
npm install express
This command will install Express and all of its dependencies. Next, let’s create a file called server.js in the root directory of the project and paste in the following code:
```ts
const express = require('express')
const app = express()
const port = 3000

app.listen(port, () => {
  console.log(`Server is listening on port ${port}`)
})
```
To run the server, let’s add the following code in the package.json.
{
// ...
"scripts": {
"start": "node server.js",
// ...
},
// ...
}
Now, if we run npm start in the terminal, we should see a message “Server is listening on port 3000”. That means our application has been successfully configured.
Installing and Configuring Clamscan
Now that we have initialized our project and implemented some basic configurations, it’s time to install and configure clamscan in the project.
To install clamscan, we should write the following command in the terminal:
npm install clamscan
It will install all the necessary dependencies and APIs required for using ClamAV for scanning and discovering malicious files.
Now, we need to change our code in the server.js like below to configure clamscan.
```ts
const express = require('express')
const app = express()
const port = 3000

const clamscanConfig = {
  removeInfected: true, // If true, removes infected files
  quarantineInfected: false, // False: Don't quarantine, Path: Moves files to this place.
  scanLog: null, // Path to a writeable log file to write scan results into
  debugMode: true, // Whether or not to log info/debug/error msgs to the console
  fileList: null, // Path to file containing list of files to scan (for scanFiles method)
  scanRecursively: true, // If true, deep scan folders recursively
  clamscan: {
    path: '/usr/bin/clamscan', // Path to clamscan binary on your server
    db: null, // Path to a custom virus definition database
    scanArchives: true, // If true, scan archives (ex. zip, rar, tar, dmg, iso, etc...)
    active: true // If true, this module will consider using the clamscan binary
  },
  clamdscan: {
    socket: '/var/run/clamav/clamd.ctl', // Unix socket file for connecting to the clamd daemon
    host: '127.0.0.1', // IP of host to connect to TCP interface
    port: 3310, // Port of host to use when connecting via TCP interface
    timeout: 120000, // Timeout for scanning files
    localFallback: false, // Do not fall back to the binary method of scanning
    path: '/usr/bin/clamdscan', // Path to the clamdscan binary on your server
    configFile: null, // Specify config file if it's in an unusual place
    multiscan: true, // Scan using all available cores! Yay!
    reloadDb: false, // If true, will reload the DB on every call (slow)
    active: true, // If true, this module will consider using the clamdscan binary
    bypassTest: false, // Check to see if socket is available when applicable
  },
  preference: 'clamdscan' // If clamdscan is found and active, it will be used by default
}

const NodeClam = require('clamscan')
const ClamScan = new NodeClam().init(clamscanConfig)

app.listen(port, () => {
  console.log(`Server is listening on port ${port}`)
})
```
Scanning Files
Now, we can scan the files in our server by using the following code block:
```ts
// ...
// Get an instance by resolving the ClamScan promise object
ClamScan.then(async clamscan => {
  try {
    const { isInfected, file, viruses } = await clamscan.isInfected('/some/file.zip')
    if (isInfected) console.log(`${file} is infected with ${viruses}!`)
    else console.log('File is harmless')
  } catch (err) {
    console.log('Error:', err.message)
  }
}).catch(err => {
  // Handle errors that may have occurred during initialization
  console.log('Initialization Error:', err.message)
})
// ...
```
// ...
We can also modify our code to scan the files uploaded through the API. Thus we can easily detect malicious content before even saving the files to the storage.
But before going into that, we need to install a couple more packages: cors and express-fileupload. We need these packages to implement uploading files to the server. We can install them by running the following command in the terminal:
npm install cors express-fileupload
Now, let’s create a new file called config.js and move clamscanConfig into it. Let’s also add fileUploadConfig to config.js, which holds the configuration options for express-fileupload.
```ts
const clamscanConfig = {
  removeInfected: true, // If true, removes infected files
  quarantineInfected: false, // False: Don't quarantine, Path: Moves files to this place.
  scanLog: null, // Path to a writeable log file to write scan results into
  debugMode: true, // Whether or not to log info/debug/error msgs to the console
  fileList: null, // Path to file containing list of files to scan (for scanFiles method)
  scanRecursively: true, // If true, deep scan folders recursively
  clamscan: {
    path: '/usr/bin/clamscan', // Path to clamscan binary on your server
    db: null, // Path to a custom virus definition database
    scanArchives: true, // If true, scan archives (ex. zip, rar, tar, dmg, iso, etc...)
    active: true // If true, this module will consider using the clamscan binary
  },
  clamdscan: {
    socket: '/var/run/clamav/clamd.ctl', // Unix socket file for connecting to the clamd daemon
    host: '127.0.0.1', // IP of host to connect to TCP interface
    port: 3310, // Port of host to use when connecting via TCP interface
    timeout: 120000, // Timeout for scanning files
    localFallback: false, // Do not fall back to the binary method of scanning
    path: '/usr/bin/clamdscan', // Path to the clamdscan binary on your server
    configFile: null, // Specify config file if it's in an unusual place
    multiscan: true, // Scan using all available cores! Yay!
    reloadDb: false, // If true, will reload the DB on every call (slow)
    active: true, // If true, this module will consider using the clamdscan binary
    bypassTest: false, // Check to see if socket is available when applicable
  },
  preference: 'clamdscan' // If clamdscan is found and active, it will be used by default
}

const fileUploadConfig = {
  useTempFiles: false,
  limits: {
    fileSize: 26214400,
  },
  limitHandler: (req, res) => {
    res.writeHead(413, {
      Connection: 'close',
      'Content-Type': 'application/json',
    })
    res.end(
      JSON.stringify({
        success: false,
        data: {
          error: `File size limit exceeded. Max size of uploaded file is: ${
            26214400 / 1024
          } KB`,
        },
      })
    )
  },
}

module.exports = {
  clamscanConfig,
  fileUploadConfig
}
```
Now, let’s add the following code blocks to the server.js file:
```ts
// ...
const cors = require('cors')
const fileUpload = require('express-fileupload')
// ...
const config = require('./config')
// ...
// CORS Middleware
app.use(cors())

// Middleware for attaching clamscan to the express request
app.use(async (req, res, next) => {
  req.clamscan = await new NodeClam().init({ ...config.clamscanConfig })
  next()
})

// Middleware for attaching files to req.files
app.use(fileUpload({ ...config.fileUploadConfig }))
// ...
```
Here, we will use req.clamscan in each router, where we need to upload a file or multiple files.
After changing the server.js this far, the complete server.js will be something like the code below:
```ts
const express = require('express')
const cors = require('cors')
const fileUpload = require('express-fileupload')
const NodeClam = require('clamscan')
const config = require('./config')

const app = express()
const port = 3000

// CORS Middleware
app.use(cors())

// Middleware for attaching clamscan to the express request
app.use(async (req, res, next) => {
  req.clamscan = await new NodeClam().init({ ...config.clamscanConfig })
  next()
})

// Middleware for attaching files to req.files
app.use(fileUpload({ ...config.fileUploadConfig }))

app.listen(port, () => {
  console.log(`Server is listening on port ${port}`)
})
```
Implementing the Upload API
So far, we have configured so many things! Now, it’s time to create an upload route, where the files will be uploaded.
Let’s first create a simple POST API for uploading files:
```ts
// ...
// POST: /avatar-upload route
app.post('/avatar-upload', (req, res) => {
  if (!req.files || !req.files.avatar) {
    return res.status(409).json({
      message: 'No file uploaded!'
    })
  }

  const avatar = req.files.avatar
  avatar.mv('./uploads/' + avatar.name)

  res.status(200).json({
    message: 'File successfully uploaded!'
  })
})
// ...
```
Here, we have implemented a simple POST API for uploading avatar images. We can test this API by uploading a file from any API client. Before that, we need to create a folder called uploads in the root directory of the project to prevent a missing-directory error.
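For a quick test from the terminal, the request might look like this (assuming the server is running locally on port 3000 and a file named photo.png exists in the current directory):
```sh
curl -F "avatar=@photo.png" http://localhost:3000/avatar-upload
```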
If the files are successfully uploaded, let’s go to the next phase where we will scan the files before storing them.
Let’s change the above code to scan the files before saving them on the server:
```ts
// ...
const Readable = require('stream').Readable
// ...
const scanFile = async (file, clamscan) => {
  // Build a readable stream from the uploaded file buffer
  const fileStream = new Readable()
  fileStream.push(file.data)
  fileStream.push(null)

  const result = await clamscan.scanStream(fileStream)
  return {
    filename: file.name,
    isInfected: result.isInfected,
    viruses: result.viruses,
  }
}

// POST: /avatar-upload route
app.post('/avatar-upload', async (req, res) => {
  if (!req.files || !req.files.avatar) {
    return res.status(409).json({
      message: 'No file uploaded!'
    })
  }

  const avatar = req.files.avatar
  const scanResult = await scanFile(avatar, req.clamscan)
  console.log(scanResult)

  if (!scanResult.isInfected) {
    avatar.mv('./uploads/' + avatar.name)
    return res.status(200).json({
      message: 'File successfully uploaded!'
    })
  }

  return res.status(502).json({
    message: 'Malicious file found!'
  })
})
// ...
```
Result
Now it’s time to test the API. Let’s upload a malicious file and see what the output from the server is.
We can see that the antivirus software has successfully detected the malicious file, and it stopped the web application from storing the file on the server. We can also warn the users by notifying them about the virus.
Summary
So far, we have installed ClamAV on our Linux Machine and created a Node.js application where we can scan the uploaded file and detect if the file is infected or not. We have also tested our API by uploading a malicious file.
We can now use this knowledge and implement scanning malicious files in all our backend applications. It will give our users better reliability and confidence in using our application.
Also, we can protect our websites from harmful attacks and security breaches. Although we cannot be 100% safe from attackers, it will keep us somewhat ahead of them and keep our resources safe and secure.
Prevent Hackers from Posting Malicious Links in Your Web Applications
Jul 02, 2021 · 4 min read · Programming
Mitigating web vulnerabilities and ensuring high security is the topmost priority of a web developer. When deploying a website for a new business or a SaaS application, we need to ensure that our system is not vulnerable to any potential risks.
The number 1 rule while working on web security is never to trust the users. Sometimes hackers post malicious links to the website to hack the users who click the links. Sometimes the hackers post phishing links to get unauthorized access to the users’ sensitive information. As web developers, it is our primary responsibility not to allow the users to post malicious links in our web application.
In this article, we will discuss how we can detect malicious links in our web applications using the Google Safe Browsing API. So, without further ado, let’s get started!
What Is Google Safe Browsing?
According to the official documentation,
Safe Browsing is a Google service that lets client applications check URLs against Google’s constantly updated lists of unsafe web resources. Examples of unsafe web resources are social engineering sites (phishing and deceptive sites) and sites that host malware or unwanted software.
To keep things simple: Google keeps a record of unsafe web resources, and when we look up a particular link, Google checks it against this record. Using the Google Safe Browsing API, we can detect the threat type, the target platform, and other necessary information.
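For illustration, a lookup against the v4 threatMatches:find endpoint returns a response roughly shaped like the one below when a threat is found (the values here are only examples); an empty object comes back for a clean URL:
```json
{
  "matches": [
    {
      "threatType": "MALWARE",
      "platformType": "ANY_PLATFORM",
      "threatEntryType": "URL",
      "threat": { "url": "http://example.com/suspicious-page" },
      "cacheDuration": "300s"
    }
  ]
}
```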
Let’s Get into The Implementation
Let’s create a simple HTML page where the user can input a link, and we will tell them whether it is malicious. If the link is malicious, we will show the type of threat and the target platform. We will use jQuery for convenience.
```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta http-equiv="X-UA-Compatible" content="IE=edge" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>Malicious Link Detector</title>
    <style>
      .red-text {
        color: red;
        display: none;
      }
      .green-text {
        color: green;
        display: none;
      }
      .details {
        display: none;
      }
    </style>
  </head>
  <body>
    <h1>Malicious Link Detector</h1>
    <div>
      <input type="url" name="url" id="url-input" placeholder="Enter link url" />
      <button type="submit" id="submit-btn">Check</button>
    </div>
    <div>
      <h3 class="red-text">Link is malicious</h3>
      <div class="details">
        <p>Threat Type: <span class="threatType"></span></p>
        <p>Target Platform: <span class="targetPlatform"></span></p>
      </div>
      <h3 class="green-text">Link is not malicious</h3>
    </div>
    <script src="https://code.jquery.com/jquery-3.6.0.min.js"></script>
    <script>
      $(document).ready(function () {
        $('#submit-btn').click(function () {
          const url = $('#url-input').val()
          console.log(url)
        })
      })
    </script>
  </body>
</html>
```
Here, we have created a basic HTML page where the user can input a link; next, we will determine whether the link they entered is malicious.
Now, we need to detect whether the link is safe or malicious. For this, we need to call the Google Safe Browsing API.
Getting An API Key
To call the Google Safe Browsing API, we first need an API key. Getting one is actually simple.
First, go to the Google Cloud Console. Log in with your Google account and create a new project if you don’t have one. Then go to APIs & Services → Library from the left panel and search for “Safe Browsing API.” Click on it in the search results, and then click the “Enable” button to activate the Safe Browsing API.
Next, go to APIs & Services → Credentials from the left panel and click “Create Credentials.” Select “API key” from the dropdown, copy the API key, and store it somewhere safe.
Calling The API
Now, as we have got the API key, it’s time for calling the API. Let’s change the HTML code like the following code to call the API to detect malicious links.
```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta http-equiv="X-UA-Compatible" content="IE=edge" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>Malicious Link Detector</title>
    <style>
      .red-text {
        color: red;
        display: none;
      }
      .green-text {
        color: green;
        display: none;
      }
      .details {
        display: none;
      }
    </style>
  </head>
  <body>
    <h1>Malicious Link Detector</h1>
    <div>
      <input type="url" name="url" id="url-input" placeholder="Enter link url" />
      <button type="submit" id="submit-btn">Check</button>
    </div>
    <div>
      <h3 class="red-text">Link is malicious</h3>
      <div class="details">
        <p>Threat Type: <span class="threatType"></span></p>
        <p>Target Platform: <span class="targetPlatform"></span></p>
      </div>
      <h3 class="green-text">Link is not malicious</h3>
    </div>
    <script src="https://code.jquery.com/jquery-3.6.0.min.js"></script>
    <script>
      $(document).ready(function () {
        $('#submit-btn').click(function () {
          const url = $('#url-input').val()

          // Payload we will be passing to the API
          const payload = {
            client: {
              clientId: 'malicious-links-detector', // Can be any name
              clientVersion: '1.0.0', // Can be any version
            },
            threatInfo: {
              threatTypes: [
                'MALWARE',
                'SOCIAL_ENGINEERING',
                'UNWANTED_SOFTWARE',
                'POTENTIALLY_HARMFUL_APPLICATION',
                'THREAT_TYPE_UNSPECIFIED',
              ], // These are the possible threat types we are looking for
              platformTypes: ['ANY_PLATFORM'], // We will detect links targeting any platform
              threatEntryTypes: ['URL'], // The threat entry type is URL
              threatEntries: [{ url: url }], // We can pass multiple urls if we want
            },
          }

          // Calling the Safe Browsing API
          // Replace {key} with your Google API key
          $.ajax({
            url: 'https://safebrowsing.googleapis.com/v4/threatMatches:find?key={key}',
            type: 'post',
            data: JSON.stringify(payload),
            contentType: 'application/json',
            dataType: 'json',
            success: function (data) {
              // If a threat is found
              if (data.matches) {
                $('.green-text').hide()
                $('.red-text').show()
                $('.details').show()
                $('.threatType').text(data.matches[0].threatType)
                $('.targetPlatform').text(data.matches[0].platformType)
              } else {
                $('.green-text').show()
                $('.red-text').hide()
                $('.details').hide()
              }
            },
            error: function (err) {
              console.log(err)
            }
          })
        })
      })
    </script>
  </body>
</html>
```
If we test this code in the browser, we can see the results for both malicious and genuine links.
Pretty impressive right!
You can find a list of malicious links on the website. You can test with other malicious links if you want. Hopefully, it can detect most of the malicious links currently available on the internet.
Conclusion
We will now be able to detect malicious links on our website and prevent our users from clicking them.
Although the Google Safe Browsing API is for non-commercial use only, we can use the Web Risk API for commercial purposes.
If you like my article, here is another article about the ultimate way to store authentication tokens in JavaScript.
Have a nice day!
How to Scrape Web Applications in Node.js using Cheerio
Jun 24, 2021 · 4 min read · Node.js
Scraping web applications is one of the most fun subjects for me, and maybe for you too. Aside from being fun, it is one of the most important topics in data science.
Many of us know how to scrape web data using Python or an online tool. This article, however, will demonstrate how we can scrape data from static websites using Node.js. We will scrape data from the webscraper.io test site and expose the data through an API.
Creating a New Node.js Project
At first, let’s create a new Node.js project. To create a new project, open a new terminal in the working directory, and type the following command:
mkdir my-scraper && cd ./my-scraper
It will create a new folder named my-scraper. To initiate a new Node.js project, type the following command in the terminal from the my-scraper directory:
npm init -y
It will create a file named package.json inside our project directory. Let’s install Express by typing the following command:
npm install express
Set up Basic Code
Now let’s create a file named index.js in the root folder of our project directory. Inside the index.js file, let’s add the basic code below:
```js
const express = require('express')
const app = express()
const PORT = 8080

app.use(express.json())

app.listen(PORT, () => console.log(`🚀 Server started at ${PORT}.`))
```
Now, if we enter the following command in the terminal, our server will start:
node .
We can see our server running by going to http://localhost:8080. We should see a web page similar to the following:
Installing Cheerio and Axios
Cheerio is a Node.js library that can be used to parse and scrape web data. Let’s install it first. To install Cheerio, put the following command in the terminal:
npm install cheerio
It will install Cheerio in our project.
Let’s also install axios for fetching the HTML code.
npm install axios
Let’s Start Scraping
So far, we have initiated our Node.js project and installed all the required dependencies. Now, we will be starting our journey to scrape data from the website.
First, let’s fetch the HTML code for scraping the data. We will first download the homepage of our target website. Change the index.js code like below:
```js
const axios = require('axios')
const cheerio = require('cheerio')
const express = require('express')

const app = express()
const PORT = 8080

app.use(express.json())

app.use('/', async (req, res) => {
  try {
    const data = await axios.get('https://webscraper.io/test-sites/e-commerce/allinone')
    if (data.status !== 200) {
      return res.status(data.status).send({
        message: 'Invalid url'
      })
    }

    const html = data.data
    const $ = cheerio.load(html)

    return res.status(200).send({
      message: 'Everything is okay'
    })
  } catch (err) {
    console.log(err.message)
  }
})

app.listen(PORT, () => console.log(`🚀 Server started at ${PORT}.`))
```
Now, let’s fetch all the product listings from the “Top items being scraped right now” section. Hit CTRL+U and observe the HTML structure, or inspect the page by hitting CTRL+SHIFT+I on your keyboard.
By observing the HTML code, we can see that the cards in the HTML are located in the following step:
div[class="wrapper"] div[class="container test-site"] div[class="row"] div[class="col-md-9"] div[class="row"] div[class="col-sm-4 col-lg-4 col-md-4"] div[class="thumbnail"]
By observing the cards, we can see that each card has an image, a title, a price, a description, a rating, and the total number of reviews. So let’s fetch these records first:
```js
const axios = require('axios')
const cheerio = require('cheerio')
const express = require('express')

const app = express()
const PORT = 8080

app.use(express.json())

app.use('/', async (req, res) => {
  try {
    const data = await axios.get('https://webscraper.io/test-sites/e-commerce/allinone')
    if (data.status !== 200) {
      return res.status(data.status).send({
        message: 'Invalid url'
      })
    }

    const html = data.data
    const $ = cheerio.load(html)

    // Select every product card and map it to a plain object
    const result = Array.from(
      $('div[class="wrapper"] div[class="container test-site"] div[class="row"] div[class="col-md-9"] div[class="row"] div[class="col-sm-4 col-lg-4 col-md-4"] div[class="thumbnail"]')
    ).map((element) => ({
      imageUrl: 'https://webscraper.io' + $(element).find('img').attr('src').trim(),
      title: $(element).find('div[class="caption"] h4 a').attr('title').trim(),
      price: $(element).find('div[class="caption"] h4[class="pull-right price"]').text().trim(),
      description: $(element).find('div[class="caption"] p[class="description"]').text().trim(),
      reviewCount: parseInt($(element).find('div[class="ratings"] p[class="pull-right"]').text().trim().split(' ').slice(0, 1).join() || '0'),
      rating: parseInt($(element).find('div[class="ratings"] p[data-rating]').attr('data-rating').trim() || '0')
    })) || []

    return res.status(200).send({
      result
    })
  } catch (err) {
    console.log(err.message)
  }
})

app.listen(PORT, () => console.log(`🚀 Server started at ${PORT}.`))
```
The results are as below:
Conclusion
You now have a firm understanding of how we can scrape data from web applications in Node.js. In this article, we have seen how we can use Cheerio to scrape data from static websites.
Scraping web data using Cheerio works well for static websites. However, this method might not work for dynamic websites, because in most frontend frameworks the website renders on the client side.
Again, web scraping is against the terms and conditions of certain web applications. You should check whether you have permission to scrape the information from the website.
Regardless of these limitations, we can scrape the necessary information from other websites and store it in our database easily. Although we can’t fetch data directly from dynamic websites, there is a workaround to fetch such data using Cheerio. Maybe I will discuss it in another article.
If you are interested, here is the complete project repository.
Have a nice day!