Introducing our new partner – SAIO

This year, XELTO DIGITAL signed a partnership agreement with SAIO. As we continue to grow and look after our customers, we are constantly expanding our partner network so that we can offer them the best solutions.

We would like to introduce our new partner:

SAIO, a comprehensive platform for business process automation, was developed at ING Bank Śląski in Poland and is widely used across the ING network. It combines the experience of the Polish ING branch in Robotic Process Automation (RPA) with Artificial Intelligence (AI) technology to provide comprehensive automation services in areas such as HR, accounting, purchasing and sales, operations, customer service, Know Your Customer (KYC), IT, and many others. Implementing SAIO leads to improved business results, cost savings, a reduced risk of human error and higher employee satisfaction.

SAIO is a universal tool that can be used by many industries. It works well in finance, accounting, HR, customer service, purchasing, logistics, IT and operations management.

The company was named among the 23 market leaders in the RPA supplier market in the Everest Group RPA Products Peak Matrix® Assessment 2022 report. Both the automation implementation part of the business and the one dedicated to providing RPA technology – the SAIO platform – are oriented towards the global market.

We take this opportunity to wish ourselves and our new partner every success and many projects completed together.


Monika Serafin – Service Delivery Manager, XELTO DIGITAL

Wiktoria Movsisyan – Jr Business Development Manager, SAIO

Source: Materials provided by the partner

The new face of artificial intelligence – a few words about ChatGPT

Artificial intelligence (AI) has recently been one of the hottest technological trends. News portals have covered it regularly for years, yet it has never been as popular as in the fourth quarter of 2022. The reason is ChatGPT, a piece of software able to answer almost any question a user asks. What is it? How do you use it? Who stands to gain, and who to lose? Read on for answers.

The ChatGPT phenomenon – what is it all about?

ChatGPT gained a million users in just 5 days. To see how enormous this achievement is, compare it with other leaders: Instagram took 2.5 months, Facebook 10 months, and Netflix over 3 years to reach that many users!

From a technological point of view, AI is not a novelty. Research on it has been going on for decades, and the world has long used artificial intelligence on a daily basis – think of face recognition in mobile phones or adaptive systems in cars. So why is ChatGPT getting so popular?

What is ChatGPT?

ChatGPT is built on the GPT in its name – the Generative Pre-trained Transformer. This is a language model, developed by OpenAI, that automatically generates text from input data. It has been ‘trained’ on a huge collection of data, which makes it very good at predicting words and sentences from the preceding text.

ChatGPT relies on the GPT-3 language model, published in 2020, which has as many as 175 billion(!) parameters.

GPT models learn from an enormous base of online sources. They have indexed:

  • popular knowledge platforms such as Wikipedia;
  • available collections of texts and books, including academic works;
  • public online platforms and news services;
  • various blogs.

Around 45 TB of data in total! It must be emphasized that this data is not imported on an ongoing basis, which means you cannot get information on current events until ChatGPT has been retrained on them.

Importantly, the first version of the GPT model emerged only in 2018. This shows how fast such technologies grow and suggests that new solutions will emerge faster and faster. The conclusion is that we are witnessing a new artificial intelligence revolution.

What can you use ChatGPT for?

This technology can be used in diverse ways. At present it is most often used in machine learning and natural language research.

The ChatGPT model is very flexible. It can be used in various situations that require human communication skills. And this means big business potential as well!

It can be used to do the following:

  • generate answers to users’ questions in customer service systems;
  • create automatic answers for chatbots;
  • write answers while talking to people over the Internet;
  • create contents for PC or mobile application chatbots;
  • automate translations.
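As a hedged illustration of the first two bullet points, here is a Python sketch of how a customer-service system could hand a user's question to a GPT-style model. The endpoint URL follows OpenAI's publicly documented chat-completions REST API, but the model name, system prompt and helper functions are illustrative assumptions, not production code.

```python
import json

# Hypothetical sketch: wiring a customer question into a GPT-style
# chat-completions request (endpoint modeled on OpenAI's documented
# REST API; model name and prompt are illustrative assumptions).

API_URL = "https://api.openai.com/v1/chat/completions"

def build_support_request(question: str, model: str = "gpt-3.5-turbo") -> dict:
    """Wrap a customer's question in a chat-completion payload."""
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "You are a helpful customer-service assistant."},
            {"role": "user", "content": question},
        ],
        "temperature": 0.2,  # low temperature -> more predictable answers
    }

def send(payload: dict, api_key: str) -> str:
    """POST the payload and return the model's reply text."""
    import requests  # third-party package; assumed available
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        data=json.dumps(payload),
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    payload = build_support_request("How do I reset my password?")
    print(payload["messages"][1]["content"])
```

In a chatbot or helpdesk integration, the same `build_support_request` helper would be called once per incoming ticket, with the system prompt adjusted to the business context.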

The output is impressive: it is correct in terms of language and natural-sounding. Grammatical mistakes do occur, of course – particularly in Polish, one of the most difficult languages in this respect.

Is ChatGPT ready to replace a human interlocutor?

A group of scientists from the Faculty of IT and ICT at the Wrocław University of Science and Technology checked ChatGPT under the CLARIN initiative. The team asked the AI over 38 thousand questions in 25 categories.

Their goal was to check:

  • how the technology could react to sarcasm,
  • if it could construe a broader context of speech,
  • if it was able to comprehend jokes.

The results were comparable to those of the current best natural language processing models (so-called SOTA: state of the art).

According to the scientists from the Wrocław University of Science and Technology, ChatGPT performed a bit worse than other technologies of this sort. The studies also demonstrated that it is very useful for quick information searches. However, if you prioritize high-quality conclusions, its results may prove insufficient.

The key advantages of ChatGPT proved to be its awareness of context and the possibility of customizing answers. On the other hand, the chat was unable to keep up with a sudden change of subject.

To summarize, ChatGPT is a huge opportunity, yet its implementation entails numerous hazards. Its use may lead people to no longer feel the need to browse through sources of information themselves.

The changes brought by advancing technologies also affect the social sphere. Just look at the effects of everyday GPS navigation or of systems that automatically suggest ‘best-matching’ products in online stores. In this context, questions arise about freedom of choice and about the human skills we abandon – reading a map on our own, finding our way in space, searching for information by ourselves.

Despite this, modern technologies have big potential, and people need to stay level-headed when facing them.


Author: Monika Serafin – Service Delivery Manager




Technological trends – what to expect in 2023

In the IT industry, the new year looks pretty interesting. The importance of automation, artificial intelligence and big data analysis, as well as of new methods of protection against hackers, is on the increase. According to the Gartner Institute’s estimates, global IT spending will rise by over 5% in 2023, to 4.6 trillion dollars. The value of the IT market in Poland will reach 56.6 million zlotys, with a nearly 10-million increase expected by 2026. According to Deloitte’s report, new technological trends will dominate the months to come and will soon have a considerable influence on companies’ development. Three aspects will drive this growth: technology management, digital security and platform modernization.

TECHNOLOGICAL TREND NO. 1 – METAVERSE, that is when enterprises start using AR and VR capabilities

The metaverse is a virtual world accessed via AR applications and VR headsets. So far such experiences have been limited mostly to entertainment, but the technology has enormous potential. It is estimated that by 2026 around 25% of consumers will spend about one hour a day in virtual reality.

At present only 30% of companies use this technology and have ready-made products and services. Interestingly, the metaverse is more and more frequently used for business: some enterprises have already started taking advantage of ‘unlimited virtual reality’ to build business models.

Current trends show that this corner of the Internet will soon turn out to be a great place to interact with customers. Such an approach will make it possible to build customer engagement and loyalty, as well as to test new ideas.

TECHNOLOGICAL TREND NO. 2 – ARTIFICIAL INTELLIGENCE, that is how to place unconditional trust in artificial intelligence

Enterprises that use AI (artificial intelligence) successfully make themselves and their tools trustworthy – and this is what gives them a competitive edge.

Algorithms based on artificial intelligence can perform tasks much as humans do. Thanks to machine learning, they can be allowed to take decisions that go beyond simple business rules.

Companies that intend to develop and expand the impact of artificial intelligence in their businesses need to understand the significance of trust in AI. It is as important as the number or power of the algorithms. Customers will choose technologies that prove credible; if this criterion is not met, they will not be willing to use them.

TECHNOLOGICAL TREND NO. 3 – CLOUD, that is how to control the chaos in the clouds

Many enterprises already use cloud solutions, and their potential uses keep growing. Sadly, numerous businesses feel overwhelmed by the sheer number of cloud applications.

The solution to this problem may be a multicloud approach. The idea is to tame the complexity of cloud environments by merging cloud management into a single control panel that gives many cloud platforms access to common services.

Cybersecurity is important when using this instrument: in view of the risk, not all data can be managed under a multicloud.

TECHNOLOGICAL TREND NO. 4 – PEOPLE, that is how important IT personnel flexibility is

Rapid technological development also poses huge challenges to people and their skills. Engineering qualifications quickly become outdated, and organizations cannot hire new employees with the specific skills needed for every technological shift. This is why experienced managers opt for flexible team members who can quickly adapt to changes and expand their competences.

Thanks to this, modern companies are capable of shaping, supporting and retaining the best specialists. Such companies secure permanent access to professionals by offering attractive terms of employment and a well-thought-out organizational structure based not only on IT capabilities but also on interpersonal skills. This allows employees to develop flexibly in the field of new technologies inside the company.


TECHNOLOGICAL TREND NO. 5 – BLOCKCHAIN, that is how digital trust is built

Blockchain is a list of records interlinked by cryptography, used in applications, business models, systems, IT architecture, supply chains and cybersecurity. Systems based on this technology have become crucial in creating and monetizing digital assets. They are also adopted to build digital trust.

The rising awareness of this technology’s potential derives from a fuller understanding of its role in building credibility. The solution brings invaluable benefits to a growing group of businesses.

TECHNOLOGICAL TREND NO. 6 – SYSTEM MODERNIZATION, that is how mainframe is gathering momentum

Given how fast technologies change, it is no longer profitable to get rid of outdated and rarely used systems. Instead, companies are eager to modernize them and combine them with new technologies.

Mainframe technology guarantees decades of support for existing software, which allows some IT solutions dating back to the previous century to keep operating.

A large group of businesses hold that the unquestionable advantages of mainframe technology cannot be overestimated. Single mainframe servers are among the most efficient on the market. Moreover, they offer flexible scalability, high availability (HA) and uninterrupted operation in case of major failures (disaster recovery, DR). It must be noted that mainframe technology is continually being improved.

As you can see, current technological trends place a big emphasis on automation and system integration on the one hand, and on security on the other. It therefore seems profitable to follow them in the long term.


Author: Monika Serafin – Service Delivery Manager



Source: Jakie będą trendy technologiczne w 2023 roku? [What will the technology trends be in 2023?],441848.html

Error management – the most important and difficult stage of automation

Robots, just like us, do not always keep an eye on their surroundings. The difference between them and us is that a robot has to anticipate all potential failures and be prepared (with a developer’s help, of course) for the problems to come. I was taught that error management is a key factor of an automation project – something like 80% of it – and I fully agree. Besides, when error handling is missing, I – a robot – may not work as requested. The crucial points are interactions with humans. When input data is not extracted or downloaded from applications but delivered by a human, i.e. prepared manually, there is a high risk that the robot will receive erroneous data, or data belonging to a scenario that is not handled. In such cases, preventive actions have to be introduced.


From the Lean Six Sigma methodology comes an approach called Poka-Yoke (Japanese for ‘mistake-proofing’). It prevents a given tool from being used incorrectly. For instance, when a robot is to be started by an e-mail sent by an operator, an Excel sheet is prepared as a template in which the user provides the input data. Based on this control XLSX file, the robot first validates the data, and only then runs the planned steps. The validation can be built with Visual Basic for Applications or Excel’s data validation features. As a result, the user receives an efficient tool that simplifies their work and at the same time prevents the robot from committing errors. Ideally, erroneous orders never reach the robot at all. When preliminary data validation is not possible on the operator’s side, the same verification must be triggered on the robot’s side.
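A minimal Python sketch of such a Poka-Yoke validation step, run before the robot touches any row. The column names ("invoice_no", "email", "amount") and formats are illustrative assumptions, not a real template.

```python
import re

# Poka-Yoke-style input screening: reject bad rows before the robot
# processes them. Field names and formats below are illustrative
# assumptions; in a real robot the rows would come from the
# operator's XLSX template.

RULES = {
    "invoice_no": re.compile(r"^INV-\d{6}$"),
    "email": re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
}

def validate_row(row: dict) -> list:
    """Return a list of problems; an empty list means the row is safe."""
    problems = []
    for field, pattern in RULES.items():
        value = str(row.get(field, "")).strip()
        if not pattern.match(value):
            problems.append(f"{field}: invalid value {value!r}")
    try:
        if float(row.get("amount", "x")) <= 0:
            problems.append("amount: must be positive")
    except ValueError:
        problems.append("amount: not a number")
    return problems

def split_input(rows: list):
    """Separate valid rows (for the robot) from rejects (back to the operator)."""
    ok, rejected = [], []
    for row in rows:
        (ok if not validate_row(row) else rejected).append(row)
    return ok, rejected
```

The rejected rows go back to the operator with the list of problems, so erroneous orders never reach the robot's main processing loop.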

Application environment and the robot’s work

The next source of procedure errors is lack of control over the environment the robot works in. For instance, the status of the robot station may be unknown:

  • Did the process that had been running previously leave behind a mess?
  • Were applications necessary for the run started?
  • Are licenses available for each of the applications in use?
  • Are RAM resources sufficient?

We may encounter errors both when a process starts and while it is running. For a process to start properly, the robot must be able to prepare its work environment, that is, close all unnecessary applications running in the robot’s own process and in other processes started on the machine the robot works on. When several sessions are allowed to run in parallel on one machine, it is important to close applications from the relevant session only. Unfortunately, the Kill option does not distinguish between an application we have just started and one started by another user, so we have to be careful not to let the robot close applications belonging to processes other than ours.
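A hedged sketch of this session-aware cleanup in Python, using the third-party `psutil` package instead of a blind Kill: the decision of what to close is a pure function, and only processes owned by the robot's own user account are terminated. The application list is an illustrative assumption.

```python
import getpass

# Session-aware cleanup sketch: close only processes that belong to the
# robot's own user account, never another session's. App names below are
# illustrative assumptions.

def should_close(name: str, owner: str, current_user: str, apps) -> bool:
    """Pure decision: close `name` only if it is on the list and is ours.
    Windows reports owners as DOMAIN\\user, so compare the tail."""
    return name in apps and owner.lower().split("\\")[-1] == current_user.lower()

def close_own_applications(apps, timeout: float = 10.0) -> int:
    """Terminate matching processes owned by the current user; return count."""
    import psutil  # third-party; assumed available on the robot machine
    me = getpass.getuser()
    closed = 0
    for proc in psutil.process_iter(["name", "username"]):
        try:
            if should_close(proc.info["name"] or "",
                            proc.info["username"] or "", me, apps):
                proc.terminate()              # polite close first
                proc.wait(timeout=timeout)
                closed += 1
        except (psutil.NoSuchProcess, psutil.AccessDenied,
                psutil.TimeoutExpired):
            continue
    return closed
```

Keeping the ownership check in `should_close` makes the policy easy to test without touching any real processes.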

Ready for every scenario

How do we recognize an application’s bottlenecks? Even if we know well how a given program works, it does not mean the program behaves identically on the client’s side. First of all, you have to go over the process with a business analyst to identify and name the critical points at which manual operations risk failure. Knowing these, a good developer can define the process steps that require special attention. Fortunately, there are many tools for managing such scenarios, and they can be combined in numerous ways. For instance, within a try-catch structure we can select more than one action scenario depending on the type of error returned, so that some errors are retried. An error of the type Selector Not Found may lead to the conclusion that the web application has not finished loading yet, so we wait another 30 seconds; other errors terminate work on the transaction, and the robot reaches for the next task from the queue.
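The try-catch pattern above can be sketched in a few lines of Python. The exception names are illustrative stand-ins (a `SelectorNotFoundError` playing the role of UiPath's "Selector Not Found"), not a real UiPath API.

```python
import time

# Retry only transient UI errors; fail fast on everything else so the
# robot can move on to the next queue item. Exception names are
# illustrative assumptions.

class SelectorNotFoundError(Exception):
    """Raised when a UI element is not (yet) on screen."""

def run_with_retry(step, retries: int = 3, wait_seconds: float = 30.0):
    """Run `step`; on SelectorNotFoundError wait and retry, else re-raise."""
    for attempt in range(1, retries + 1):
        try:
            return step()
        except SelectorNotFoundError:
            if attempt == retries:
                raise                      # give up after the last attempt
            time.sleep(wait_seconds)       # the page may still be loading
```

Any other exception type propagates immediately, which is exactly the "terminate this transaction, take the next one" behaviour described above.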


A thorough analysis helps ensure that a robot is well prepared for any scenario. That is why it is very important to build a robot structure that anticipates different kinds of errors, unknown for now, and handles them by default. This default treatment can be logging out of the application, closing the browser, and restarting the process with the next order in line. We have already mentioned our Framework – click to recall – which is used in every automation we deliver. One of its key features is exactly this default error handling. Thanks to it, a job can continue even when an unpredictable scenario arises.

Standardized error nomenclature

The last but not least issue in error handling is communication. Depending on the scenario, a problem may be temporary or may require investigation by an operator or developer. The Xelto Digital Framework has a built-in mechanism that distinguishes between system errors and business errors. Business errors are dispatched to the users working with the robot, while system errors are a matter for the provider’s repair actions. Error naming is a very important element of the Framework: it helps organize error handling across processes in which several robots are involved. This way it does not matter in which robot a particular error is found – the error’s meaning and way of handling are what matters, and these are the same for all robots.
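A standardized error taxonomy like the one described could look as follows in Python. This is an illustrative sketch, not the actual Xelto Digital Framework: the codes and audiences are invented for the example, but the idea is the same, every error carries a shared code and routing target regardless of which robot raised it.

```python
# Standardized error nomenclature sketch: each error type carries a code
# and an audience, so its meaning and handling are identical across all
# robots. Codes and audiences are illustrative assumptions.

class RobotError(Exception):
    code = "GEN-000"
    audience = "provider"           # who should be notified

class BusinessError(RobotError):
    audience = "business_user"      # goes to people working with the robot

class TechnicalError(RobotError):
    audience = "provider"           # goes to the provider's support team

class MissingAttachmentError(BusinessError):
    code = "BUS-101"

class AppUnavailableError(TechnicalError):
    code = "SYS-210"

def route(error: RobotError) -> dict:
    """Build a uniform notification record from any robot error."""
    return {"code": error.code,
            "audience": error.audience,
            "message": str(error)}
```

Because routing reads only the class attributes, adding a new error type is a one-line subclass and every robot in the fleet handles it the same way.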

Most important and most difficult

The bottom line is that error handling is one of the most difficult and most crucial steps in automation design. It has to be multilayered: from input data validation and connection checking, through bottleneck monitoring, to handling unpredictable scenarios.

Author: Rafał Korporowicz – Senior RPA Developer

Part 13. SAP – and nothing else needs to be added

The first season of the series about my adventures has just come to an end. So I would like to briefly sum up what happened in the last couple of months. If you followed my footsteps, you know perfectly well that I have turned from a regular robot into a fairly avant-garde Ed Robotowsky. I have been wearing a suit for a while now, my manners are impeccable, and I love to meet new people and join the teams of our Clients. At the end of the first season, I will tell you a secret. In the XELTO DIGITAL team, I am Ed, but, after automating the processes and joining the teams of Our Clients, I get a new name from my new Colleagues and embark on new, even more interesting adventures with each of the teams. If you are curious about the names I received so far, including female ones, follow the second season. In the meantime, let’s find out, for the last time this season, what Kamil and I think about the methods of process automation in the SAP ERP system.

With the Expert’s eye:

ERP systems are a regular feature of the IT resources of the large production companies we meet when automating most processes. ERP systems track business resources, such as cash, raw materials or production capacity, and the status of business commitments, e.g. orders, purchasing and payroll. The applications comprising the system share these data, creating an integrated and constantly updated view of the basic business processes, built on shared databases maintained by a database management system.

As a pioneer of such software, SAP was and still is the market leader. When implemented properly, the SAP ERP system is characterized by a highly developed work culture and stability, which makes it a good basis for automation. Suffice it to say that the system itself is designed to enable the automation of some actions: it has a built-in macro recorder which can also be used in our robots, but more on that later.

NOTE: Remember that to use all the blessings of automation, we first need to activate it on the server. This is done with the RZ10 transaction, where we must edit the appropriate profile parameters.

An unquestionable benefit of automating the SAP ERP system is the wide range of methods available. Robots created with UiPath can draw upon the whole set of universal activities, which usually communicate with the SAP interface without any problems and create very stable and unambiguous selectors.

Apart from the default activities, UiPath also provides a suite of activities dedicated to SAP. They support the most frequent actions, such as logging in to the system, calling a relevant transaction or reading the current message on the status bar. They make work on automation even more effective, as we do not have to spend time programming everything from scratch.

Another method I have already mentioned is to record scripts with the SAP macro recorder and call them from a code module. In this way we can perform actions that cannot be performed through the interface used in the standard way (e.g. ticking several table columns at once), or perform them more effectively. It can also serve as a workaround when the basic activities do not fulfil their purpose, though this does not happen very often.
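To make the script-based method concrete, here is a Windows-only sketch of driving SAP GUI Scripting (the macro mechanism mentioned above) from Python via the third-party `pywin32` package. The element IDs and the transaction code are illustrative assumptions modeled on typical recorded SAP scripts, not a verified production workflow.

```python
# Sketch: attach to a running SAP GUI session via the SAP GUI Scripting
# API and start a transaction. Requires Windows, a running SAP GUI with
# scripting enabled, and pywin32. Element IDs below are illustrative
# assumptions based on typical recorded scripts.

SESSION_PREFIX = "/app/con[0]/ses[0]"

def element_id(suffix: str) -> str:
    """Build a full SAP GUI element ID from the session prefix."""
    return f"{SESSION_PREFIX}/{suffix.lstrip('/')}"

def open_transaction(tcode: str = "VA03") -> None:
    """Attach to the first SAP GUI session and start a transaction."""
    import win32com.client  # pywin32; only on the robot's Windows machine
    sap_gui = win32com.client.GetObject("SAPGUI").GetScriptingEngine
    session = sap_gui.Children(0).Children(0)   # first connection, first session
    # Type "/nVA03" into the command field and press Enter
    session.findById(element_id("wnd[0]/tbar[0]/okcd")).Text = f"/n{tcode}"
    session.findById(element_id("wnd[0]")).sendVKey(0)
```

The same `session.findById(...)` calls can replay whole recorded macros, which is how a robot performs actions the standard interface does not expose.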

To sum up: processes supported by the SAP ERP system are well suited for automation, provided several other conditions are met. The way the application works with UiPath, and the stability of the created solutions, allow us to expect that processes automated in this way will cause no problems in use and will not require much maintenance effort.

Author: Kamil Gawlista – RPA Developer

Photo: Freepik

Part 12. Edward explores the secrets of the LEAN SIX SIGMA philosophy

During the holidays, most of our clients rested, taking advantage of the holiday season. I, meanwhile, had more time to broaden my horizons. Thanks to the kindness of my teammates, I learned a lot about automation and more. My friend Rafał agreed to share his knowledge about the LEAN SIX SIGMA philosophy with me.

The Expert’s eye:

People often ask me whether a given process can be automated. Somewhat controversially, I always answer yes. Nevertheless, I always add that the more important question is whether the process can be automated in its current form. At this point the situation usually gets complicated, and it turns out that automation is only a further step on the way to a specific destination – certainly not the first one.

It is said that the only constant in life is change.

The same applies to processes, which constantly evolve with changes in the environment they operate in: legal, technological, internal-structure or customer-profile changes. For example, working with paper documents is giving way to their digital counterparts, which has forced the implementation of digital signature tools.

This also leads to the deployment of Continuous Improvement methodologies. One of them is Lean Six Sigma, a combination of the Lean and Six Sigma concepts that tries to merge the best solutions from both approaches. It is based on several key ideas.

Genchi Genbutsu – freely translated as “go and see”. The essence of this approach is that managers should not only spend time behind the desk, but also go and see the causes of a problem at its source, i.e. where the specific product is being manufactured. To solve specific problems, it is first necessary to become thoroughly familiar with each stage of the process, carry out an analysis and draw conclusions. For business process automation this means that managers and directors should also be familiar with the processes performed by their subordinates. This makes it easier for them to identify which processes should be automated first from the perspective of the company strategy. Decisions should be based on the following steps:

  1. Set the goal
  2. Select the area
  3. Visualise the process
  4. Observe and ask questions
  5. Review the observations
  6. Identify the differences between the actual situation and the desired situation
  7. Present the results

The sequence of these successive events is very similar to the next Lean Six Sigma issue, which is the DMAIC cycle consisting of five phases.


The Define phase allows you to determine exactly what the problem is and what is necessary to solve it. The goal set like this should be SMART, i.e. specific (S), measurable (M), achievable (A), relevant (R), and time-based (T).


This allows you to move to the next phase, that is Measure, where we describe how to make observations, and then make appropriate calculations to determine the current process efficiency.


This leads directly to the next phase – Analyse.


At this stage, we already have the basic data available to make a decision, which allows us to implement the necessary changes in the Improve phase. The devised solution should be tested, checked, and finally implemented. To do this, you can use the PDCA cycle and the FMEA.


The last phase is Control, which is designed to verify the assumptions which have been made and to determine whether the specific problem has been solved.

The already mentioned PDCA, known as the Deming Cycle, is based on dividing the implementation of an improvement into four stages:

  1. Plan – placing the main focus on what is not working properly
  2. Do – implementing changes as a test
  3. Check – reviewing the results to see if the test has passed
  4. Act – implementation into production

A circular methodology is characteristic here: the end of the Act phase should lead back to the Plan phase. If the experiment is unsuccessful, however, you skip the implementation-into-production step and go back to preparing a new plan – until the problem is solved.

How does the FMEA fit into all of this? FMEA (Failure Mode and Effects Analysis) is an analysis of the types and effects of possible errors, aimed at preventive actions against defects that may occur at both the design and manufacturing stages. The main assumption is that approx. 75% of errors arise in the production preparation phase, while approx. 80% of errors are detected only during production and operation. How does it work? First we set our goal, gather the team, decide what we are going to analyse, break the process down into its constituent parts, and collect the data. The data then serve us in both the qualitative and the quantitative analysis. The final step is to plan corrective actions, implement them, and observe the results.
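In practice, the qualitative/quantitative analysis step of FMEA is usually done by scoring each failure mode on 1-10 scales and ranking by the Risk Priority Number (RPN = severity × occurrence × detection) – a standard FMEA convention, though not spelled out above. The sample failure modes in the test are illustrative.

```python
# FMEA scoring sketch: rank failure modes by Risk Priority Number,
# RPN = severity * occurrence * detection, each scored 1-10.

def rpn(severity: int, occurrence: int, detection: int) -> int:
    """Compute the Risk Priority Number for one failure mode."""
    for v in (severity, occurrence, detection):
        if not 1 <= v <= 10:
            raise ValueError("FMEA scores must be between 1 and 10")
    return severity * occurrence * detection

def rank_failure_modes(modes: list) -> list:
    """Attach an RPN to each failure mode and sort worst-first."""
    scored = [dict(m, rpn=rpn(m["severity"], m["occurrence"], m["detection"]))
              for m in modes]
    return sorted(scored, key=lambda m: m["rpn"], reverse=True)
```

The modes at the top of the ranking are the ones whose corrective actions should be planned and implemented first.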

Everything I have written about here works very well both when deploying further automations and when optimising existing processes. Before a programmer begins to work, he or she goes through the Genchi Genbutsu procedure together with the analyst to get a good understanding of the process to be automated. Then they create a master work plan, much like the DMAIC cycle. The programming itself takes the form Plan => Program => Test => Deploy, which is deceptively similar to the PDCA methodology. At the same time, the programmer, together with the SME and the analyst, prepares a list of potential errors and determines how to handle them in accordance with the FMEA method. Perhaps not everything runs by the book or comes with standard documentation, but the hard core of these methods is always the same.

Author: Rafał Korporowicz – Senior RPA Developer

Part 6. White List. Can it be verified differently?

I remember how all entrepreneurs began their adventure with the White List. Of course, everything started with a great deal of chaos. Sometimes I wonder if every change has to start this way. But in fact changes are good – at least I like it very much when something is happening. Going back to the White List: everyone was afraid, nervous about how it would work. And it turned out that the devil is not as bad as he is painted. Admittedly, Bond never used this saying, but I am sure he had it in his blood. Sitting with Przemek and sipping coffee, I heard a bit about the White List:

With the expert’s eye:

Since 2019, there has been an obligation in Poland to verify counterparties against the so-called White List. It is a government list (search engine) of all companies, indicating their status as VAT payers.

There are solutions on the market that allow mass polling of the White List database via the API, although they are dedicated to and tailor-made for a specific customer and ERP system. They also have certain limitations, derived directly from the provisions of the law, regarding the maximum number of daily enquiries and the maximum amount of data processed per query (300 records at most).

As XELTO DIGITAL, we have developed another way of verifying the White List based on an automated search on the website, without using the API.

The user provides a list of tax ID numbers and/or bank account numbers of the counterparties. The robot then checks each number, one by one, on the VAT taxpayers’ search engine page and collects the necessary information: the taxpayer’s status, whether the account number(s) appear on the list of verified accounts, the search date, and the unique search ID. It then saves this data in an output file that can be placed in any location or e-mailed to the ordering person.
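Before sending each number to the search engine, the robot can screen out malformed tax IDs locally. The Polish NIP checksum is public: multiply the first nine digits by fixed weights, take the sum modulo 11, and compare it with the tenth digit (a result of 10 means the number is invalid). A minimal sketch:

```python
# Local pre-validation of Polish tax IDs (NIP) before querying the
# White List: the checksum weights and modulo-11 rule are the official
# NIP algorithm, so malformed numbers can be rejected without a search.

NIP_WEIGHTS = (6, 5, 7, 2, 3, 4, 5, 6, 7)

def is_valid_nip(nip: str) -> bool:
    """Check the 10-digit NIP checksum; separators/spaces are ignored."""
    digits = "".join(ch for ch in nip if ch.isdigit())
    if len(digits) != 10:
        return False
    checksum = sum(w * int(d) for w, d in zip(NIP_WEIGHTS, digits)) % 11
    return checksum != 10 and checksum == int(digits[9])
```

Rows that fail this check can go straight to the output file as invalid, saving a search on the government page for each of them.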

In contrast to existing solutions, the advantage of this one is considerable freedom of configuration, enabling the customer to tailor it to their specific needs. Another great advantage is circumventing the limits that apply to API queries: in this solution we simulate human operation, opening the website and searching for payers ‘manually’.

A process automated through the user interface is slower than sending queries via the API, yet still many times faster than a person, and it avoids the limit on the number of searches. Technically, development requires attention to the selectors of the search buttons: after the first search, the button moves to a different location on the screen, and although it looks exactly the same, its selector is different in subsequent queries. If the input file contains both tax ID numbers and bank account numbers, the two lists should be separated in the robot code.

The robot first checks the bank accounts and only then the tax ID numbers, thus avoiding unnecessary switching between screens to search.

Author: Przemysław Wal – RPA Developer


Part 1. ReCAPTCHA without secrets.

I’m Robotowsky… Ed Robotowsky. I know, it sounds like a story about the famous Agent 007. But I can tell you that my adventures are just as intriguing as those of the main character in Ian Fleming’s novel. In addition, I am a huge fan of James Bond and I love technological innovations like he does. We also have another common secret, but more about that another time. Well, let’s start from the beginning.

In the coming weeks, I’d like to take you into the fascinating world of automation that I have become boundlessly absorbed in. I could tell you about all the innovations and possibilities offered by automating processes endlessly. Thanks to working with the XELTO DIGITAL team, I can draw knowledge from our experts on an ongoing basis and share it with you in my blog:

With the Expert’s eye:

Does a robot saying “I’m not a robot” depart from the truth? Working with Web sites and Web applications is one of the basic things that robots have to cope with in their everyday work. But what if we have an ideal candidate: a simple, structured process with digital inputs, and the famous “I’m not a robot” prevents us from reaching our goal? Fortunately, it is not the end of the world (nor our project), because we have several options.

First of all, especially if we are working on an internal application, we can contact the administrator to request a version of the site without reCAPTCHA.
More often than not, though, we will have to deal with the issue ourselves. These days we can use machine learning solutions that are either free or inexpensive. Let me pass over the paid ones: plenty of them can be found on the Internet, and their vendors are best placed to explain how they work. Before you reach for them, however, test the Buster plug-in: Captcha Solver for Humans. It is available as an add-on to both Chrome and Firefox.

How does it work?

The plug-in adds a new button to the reCAPTCHA window. Clicking it activates the audio version of reCAPTCHA; the plug-in then listens to it and enters the correct solution to the task. I should add that this is a button our robot can click without any problems!

What does it look like?

When our robot activates reCAPTCHA, there are two possibilities. The best outcome for us is when the mechanism simply lets us pass. If not, we will see the well-known window with images. After the plug-in is installed, a third button with an orange figure appears, and that button is our solution. When the robot presses it, the automatic solving of the task is activated. All that remains is to extend the developed procedure so that it monitors the reCAPTCHA behaviour.

Is this an ideal solution?

No. With a large number of logins, reCAPTCHA will treat us as an attack on the site and block any attempts to connect to it. In addition, while working on the active screen goes very well in both Chrome and Firefox, moving the process to a virtual machine makes communication difficult. My experience has shown that this task is better handled by Firefox using the ‘send window message’ input method, preferably in version 64, because an error in later versions causes the plug-in to stop responding (fortunately, restarting the browser should resolve this). Nevertheless, this method should help to overcome the reCAPTCHA problem in process automation.

Author: Rafał Korporowicz – Senior RPA Developer

Withholding Tax – What does automation help you with? (I)

Searching the foreign databases of contractors.

Withholding tax is a kind of CIT applied to cross-border transfers made from Poland. It is a form of income tax (applicable to both legal persons and natural persons) collected by withholding agents on certain revenues (including, but not limited to, dividends, interest, and royalties). CIT withholding tax is levied on a transaction when the recipient of a transfer (who becomes the taxable person) has a different tax residence than the sender of the transfer (who pays the tax). Poland has signed double tax treaties with many countries. Under their provisions, the Polish sender of a transfer for services provided by a foreign contractor, and paying withholding tax on it, must know the tax data of the contractor to whom the tax will be charged.

In view of the new regulations, Polish taxpayers who make cross-border payments of more than PLN 2 million (in a given year to a given payee) are obliged to collect withholding tax at the appropriate rate, i.e. 20% or 19%.

The collection of the tax can be avoided if the management of the Polish entity making the payment makes a statement that, among other things, it has verified with due diligence that the foreign entity receiving the payment is the beneficial owner of the payment and runs actual business activity. However, if this is the case, the liability for uncollected tax and risk in the event of a future dispute with the tax authorities is transferred to the management of the Polish withholding agent. The application of withholding tax (WHT) exemption will also be possible after obtaining an individual opinion from the tax authorities confirming the relevant status of the foreign recipient of payments made from Poland.

Due diligence involves checking and retrieving contractors’ tax data, such as tax identification numbers, business addresses, and residence addresses. For an employee of the company that pays withholding tax, this means hours spent on various foreign tax services, searching the databases of registered contractors, and retrieving their tax data. The more countries of origin for contractors, the more data there is to be checked; Europe, the United States or Japan – each area has its own tax reporting services, and it is a very time-consuming task to request data from these services. For a company purchasing multiple services that are subject to withholding tax, this means a lot of time spent preparing tax data.

In this case, automating the search process for contractor data is a huge benefit. Ed Robotowsky’s job is to relieve the employee of the company by taking over the process of checking and retrieving the data of contractors from different countries and tax areas.

The data to be processed in the automation process is a list of contractors and a list of countries and tax services for these countries, and the robot’s activity is to provide the employee with data divided into two groups: the data which can be obtained publicly and the data which are accessible after a certain amount is paid. The whole operation takes place, so to speak, in the background of the company’s activity. The robot operates smoothly and provides the necessary reports for analysis.

Then, all the employee has to do is assess the relevance of the paid data and request the robot to download the items they decide to pay for. The robot, equipped with a mechanism for making payments to the specific bank accounts of such services and receiving the paid tax data, then continues its work. The final report provides data that meets the due diligence requirements for verifying a foreign contractor.
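The division of contractor records into the two groups described above can be sketched as follows. The record fields and the `paid_source` flag are illustrative assumptions, not the robot’s actual report layout:

```python
def split_report(contractors: list[dict]) -> dict:
    """Divide contractor records into data obtained publicly and data that
    a given country's tax service releases only against a fee, mirroring
    the two groups in the robot's report for the employee."""
    report = {"public": [], "paid": []}
    for record in contractors:
        key = "paid" if record.get("paid_source") else "public"
        report[key].append(record)
    return report
```

The employee reviews the `paid` group and hands back to the robot only the records worth paying for.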


Author: Monika Stawicka – Business Analyst