Wednesday, 15 March 2017

Securing your data in the cloud

Organisations are continually moving business applications and services to the cloud. Alongside the growth of remote workers within an organisation, securing and controlling access to cloud-based infrastructure and services has become increasingly challenging.

While some organisations have mature Identity and Access Management (IAM) solutions protecting internal systems, many have responded to the rapid adoption of cloud by reusing these existing policies to secure it. This is the wrong way to approach the issue. Cloud must be treated for what it is: a different solution which requires its own policies and controls.

Risks and threats

Cloud providers will often have their own security controls in place to protect their services. However, businesses must be aware that it remains their responsibility to protect their own data in the cloud. The security controls offered to an end user are usually limited and, in some instances, simply do not exist. Many of the most common risks to cloud-based services can be reduced by ensuring an IAM solution is in place.

The most common risks which can be reduced through an IAM solution are:
  • Poor identity and access governance and management
  • Data breaches caused by poor credentials and identity management and procedures
  • Insecure user interfaces and APIs
  • Compromised accounts
  • Insider threats

Whilst an IAM solution will provide the ability to reduce these risks and threats, unless it is combined with a mature strategy and the correct processes and procedures, the reduction in risk will be far smaller.

The key considerations when moving to the cloud are to evaluate and understand the gaps in existing processes, policies and procedures, the potential need for additional security controls, and the requirement for detailed planning and project governance. Carrying out these key actions will help ensure any adoption of cloud services or infrastructure is a success.

To read our full paper, ‘Securing the Cloud’, click here.

Tuesday, 7 March 2017

Open Banking initiative: What does this mean for the UK banking sector?

By: Barry O’Donohoe, Co-Founder, RAiDiAM Consulting

A report from Identity and Access Management specialists Ilex International and RAiDiAM Consulting looks at the upcoming Open Banking legislation and the impact on UK banking organisations.

The Open Banking initiative was formed as a result of the UK Competition and Markets Authority’s (CMA) latest effort to promote increased competition and consumer choice among banking service providers. In addition, the CMA intends to expand upon the European Union’s second Payment Services Directive (PSD2) by being more definitive in specifying the technological implementation of standards.

The APIs mandated by Open Banking will transform the existing relationship between banks and their customers and raise complex identity assurance and access management challenges. Providing a standard set of APIs will be challenging for many functional and technical reasons. Perhaps most challenging from a security perspective will be the replacement of bespoke application protection mechanisms, protocols and internal standards with a single, modern Identity and Access Management (IAM) capability that can integrate with third parties.

Open Banking in action

Open Banking API offerings are broadly categorised into three services: public information, account information services (AIS) and payment initiation services (PIS). The CMA’s high-level roadmap schedules the delivery of APIs in order of their security or risk level. Open-data APIs, which require no security to implement, will be delivered first, starting with financial product descriptions and ATM/branch locations by the end of Q1 2017.

Achieving assurance in a headless world

Today, customers interact with banking services almost exclusively via first-party channels, whether mobile, telephony or face-to-face. Such channels require customers to complete an appropriate degree of identification and verification before services or information are provided.

With an API channel consumed by third parties, by contrast, banks will need to address use cases where third-party providers (TPPs) perform operations on a customer’s behalf when the customer may not be present during the transaction. Banks must adjust their security posture to reflect the loss of control, the loss of quality assurance and the variable degrees of security in the apps customers may use to access banking services.
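The customer-not-present checks a bank must make can be illustrated with a minimal Python sketch. Everything here – the consent store, the identifiers and the scope names – is hypothetical; real Open Banking deployments use OAuth 2.0 access tokens and formally defined consent objects rather than an in-memory dictionary:

```python
from datetime import datetime, timezone

# Hypothetical consent record a bank might hold for a TPP's delegated access.
CONSENTS = {
    "tpp-123:customer-456": {
        "scopes": {"accounts:read", "payments:initiate"},
        "expires": datetime(2017, 9, 1, tzinfo=timezone.utc),
    },
}

def authorise_tpp_request(tpp_id, customer_id, requested_scope, now=None):
    """Allow a customer-not-present operation only if an unexpired
    consent grants the TPP the requested scope."""
    now = now or datetime.now(timezone.utc)
    consent = CONSENTS.get(f"{tpp_id}:{customer_id}")
    if consent is None:
        return False          # no consent on record for this TPP/customer pair
    if now >= consent["expires"]:
        return False          # consent has lapsed; customer must re-authorise
    return requested_scope in consent["scopes"]
```

The key point the sketch makes is that the decision no longer rests on the customer authenticating in-session; it rests on a previously captured, scoped, expiring consent.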


Digital identity assurance is at an inflection point. The coming wave of digital financial asset management APIs will enable new and innovative services to be deployed at a pace previously unseen in the financial services industry. API-delivered services have the potential to significantly increase the threat surface banks are exposed to and pose new challenges for identity assurance. Delivering an API channel will require significant investment in IT security and IAM infrastructure, as well as the re-engineering of business processes to manage the numerous new identity classes and their authorisations.

To read the full paper, ‘Open Banking and PSD2: An Inflection Point for Digital Identity Assurance’, click here.

Monday, 19 December 2016

Time to re-engineer Identity and Access Management

Historically, IAM was driven by compliance and user provisioning. It had a very limited scope of coverage in terms of applications, a low return on investment, and provided only restricted controls and views of access.

As more business applications and services move to the cloud and mobile working increases, controlling access to data and keeping it secure is challenging. Ilex International has re-engineered the approach to IAM, enabling organisations to benefit from the evolving technology landscape, whilst maintaining security in a much simpler way.

Written in conjunction with our UK consulting partner, Rivington Information Security, our latest paper, 'Time to re-engineer Identity and Access Management', explores the current challenges faced by organisations and sets out a new approach to IAM covering:
  • Mobile security
  • Cloud security
  • Universal access management
  • Standardised identity management

You can download the paper here.

Thursday, 17 November 2016

Single Sign-On (SSO): How to differentiate the good from the bad when it comes to user experience and security

By: Steve Mullan, UK Operations Manager

Launching a Single Sign-On (SSO) project in an organisation is often linked to the user’s dissatisfaction with the current IT system, and the need to remember countless logins and passwords to access everyday applications.

In the absence of an application designed to remember IDs and passwords and input them automatically for the user, many users tend to bypass security policies. They choose weak passwords (few characters, simple, easy to guess), write them down, or share them with trusted colleagues while they are out of the office.

For the Chief Information Security Officer (CISO), the absence of SSO results in unacceptable weaknesses, including incompatible security strategies and password policies for each application, varying levels of security and complicated audit or traceability. Last but not least, the cost to businesses is very real. Industry analysts maintain that the cost of resetting user passwords and maintaining the necessary support teams represents several dozen euros per user per year for large organisations.
All of these issues can easily be resolved by implementing an SSO solution. SSO considerably improves the user experience: the user authenticates only once, via the primary authentication method defined by the company. The SSO solution then automatically authenticates the user across the other applications they want to access, using their secondary credentials. There is no need to remember multiple passwords; SSO takes care of it. This provides added peace of mind for both users and the IT security team.

Once the budget for an SSO project is approved, the CISO typically manages the implementation and oversees the project. This process should always start with answering some basic questions, such as: how do you differentiate the good SSO solutions from the bad ones?

A bad SSO solution is limited to SSO features alone. A good SSO solution adds features essential to IT security: strengthening authentication mechanisms, applying access control logic – limiting an authenticated user’s access to specified applications – and tracing access to all protected applications.

Before automatically authenticating a user on an application, a good SSO solution will strongly authenticate them and control their rights. It also improves the user experience and traces their activities, including authentications, authorisations and delegations. Security is therefore maximised both before (through authentication and access control) and after (through traceability) the SSO operation takes place.
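The flow described here – one primary authentication, then automatic secondary authentication per application, with every event traced – can be sketched in Python. This is a toy illustration of the pattern, not a description of any particular product; all class and method names are invented:

```python
import hashlib

class SSOBroker:
    """Toy SSO model: one primary authentication unlocks a vault of
    per-application secondary credentials, and every event is audited."""

    def __init__(self, primary_password_hash):
        self._primary_hash = primary_password_hash
        self._vault = {}       # app name -> stored secondary credentials
        self._audit = []       # traceability of every authentication event
        self._session = False

    def primary_login(self, password):
        """Primary authentication, the only step the user performs."""
        ok = hashlib.sha256(password.encode()).hexdigest() == self._primary_hash
        self._session = ok
        self._audit.append(("primary_auth", ok))
        return ok

    def enrol(self, app, username, password):
        """Store an application's secondary credentials in the vault."""
        self._vault[app] = (username, password)

    def open_app(self, app):
        """Secondary authentication: denied without a primary session."""
        if not self._session or app not in self._vault:
            self._audit.append(("sso_denied", app))
            return None
        self._audit.append(("sso_auth", app))
        return self._vault[app]  # in reality, injected into the app's login form
```

Note that the audit trail records denials as well as successes, which is what makes the traceability requirement discussed below possible.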

What to look out for in a good SSO solution 

  • Authentication: As access to all authorised applications is automatic once the user is primarily authenticated, it is essential that the primary authentication is properly secured. Any SSO solution should offer several one-, two- or three-factor authentication methods (something they know, something they have and something they are). The choice will depend on the security method used – smartcards, USB keys, etc. – and the sensitivity of each application. Authentication must also be possible across all devices (fixed workstations, laptops, tablets and smartphones), on web portals accessed from inside or outside the organisation, and in virtualised environments.
  • Access control: Once the user is authenticated on the SSO solution, it should be possible to grant access, or not, depending on various criteria. This includes the level of primary authentication (one, two or three factor), time slots, origin IP/DNS, user profile provided by the company’s directory or the access rights management solution and type of device (PC, tablet, smartphone).
  • SSO: SSO must be accessible across all applications including web applications, client server applications, virtualised, mobile, internal or external, in the office or at the partner’s location, in SaaS or Cloud mode, controlled or not, etc. In short, it must cover Web Access Management, Enterprise SSO and Identity Federation. Organisations using only web applications or external applications are largely a thing of the past. 
  • Traceability: For all protected applications, reports, audits, authentication statistics, authorisations and delegations should be visible from a single source.
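The access control criteria listed above can be combined into a single policy decision. The following Python sketch is purely illustrative – the policy fields and their values are assumptions, not any product's real configuration:

```python
from ipaddress import ip_address, ip_network

# Illustrative policy for one application, modelled on the criteria above:
# primary authentication level, time slot, origin IP and device type.
POLICY = {
    "min_auth_factors": 2,
    "allowed_hours": range(7, 20),                 # 07:00-19:59
    "allowed_networks": [ip_network("10.0.0.0/8")],
    "allowed_devices": {"pc", "tablet"},
}

def access_granted(auth_factors, hour, source_ip, device, policy=POLICY):
    """Grant access only when every criterion in the policy is met."""
    return (auth_factors >= policy["min_auth_factors"]
            and hour in policy["allowed_hours"]
            and any(ip_address(source_ip) in net
                    for net in policy["allowed_networks"])
            and device in policy["allowed_devices"])

print(access_granted(2, 9, "10.1.2.3", "pc"))   # True
print(access_granted(1, 9, "10.1.2.3", "pc"))   # False: single factor only
```

The design choice worth noting is that all criteria are conjunctive: failing any one of them denies access, which matches the "grant access, or not, depending on various criteria" behaviour described above.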

Good SSO serves all purposes: users enjoy quick and easy authentication to all authorised applications and the CISO can apply comprehensive security policies covering the entire IT system. SSO strengthens authentication and allows for traceability to all controlled applications. In addition, the Chief Finance Officer (CFO) can considerably reduce the cost of managing and renewing user passwords.

Why choose between user comfort and increased security when you can have both?

Tuesday, 13 September 2016

Mega breaches: What’s behind the headlines?

Nowadays, barely a day goes by without an organisation getting hacked. In this age of ‘big data’, cyber criminals can compromise almost any type of personal information. As technology continues to evolve, so does the number of routes of entry for criminals to gain access to sensitive information. These attacks are also increasing as more businesses use the cloud, adopt Bring-Your-Own-Device (BYOD) policies and deploy other connected objects.

Why do attacks happen?

More often than not, hacks are conducted with criminal intent. Hackers are on the look-out for what will benefit them – financially or otherwise. The cyber crime landscape is always changing and organisations can find it difficult to stay ahead. Hacks and cyber-attacks can take a number of forms, including:
  • State-sponsored attacks/cyber espionage: Considered by many to be the new form of inter-state spying. This is usually to uncover state secrets or areas of interest that may be useful to the country carrying out the attack
  • Insider threats: Insider threats are attacks carried out – both accidentally and maliciously – by those within an organisation. The risk of insider threats is on the rise, with 64 percent of security professionals saying insider threats occurred more frequently in 2015[i]
  • External attacks: On a basic level, these are attacks by anyone outside of an organisation. However, beyond that the reasons behind external attacks can differ greatly – state-sponsored attacks are an example. More usually, external hackers are simply cyber-criminals out for personal financial gain.

Our paper, ‘Mega Breaches: Behind the headlines’, examines the rise in mega breaches and why they happen, and looks at some of the most highly publicised mega breaches of the past couple of years. It also explores what steps organisations can take to mitigate the risk and protect sensitive data.

[i] Insider Threat Report 2015, Computer Weekly:

Thursday, 7 July 2016

Ilex International and Goode Intelligence explore the future of mobile security

Ilex International has worked with mobile security research and consultancy specialist, Goode Intelligence, to develop a white paper exploring the increase in mobility – and why it is still not fully accepted by the enterprise. For many organisations, this is because they do not want to sacrifice usability or risk inadequate security controls. With 80 percent of adults expected to have a smartphone by 2020, Goode Intelligence looks in the paper at the future of mobile security and how a next-generation security solution should operate – introducing Ilex International’s Sign&go Mobility Center.

Research suggests the number of smart mobile devices (SMDs) managed in the enterprise increased by 72 percent from 2014 to 2015. While this is a significant increase, businesses are still not embracing mobile to its fullest potential or making devices readily available in the workplace, largely due to technology, security and regulatory concerns held by the IT department.

So, what is the solution? There is no doubt enterprises face a significant challenge in providing improved applications through mobile. Goode Intelligence believes convenience, mobility and strengthened security all need to be considered – while meeting company policy and regulatory requirements. Goode Intelligence has researched this topic in depth since 2007 and considers five characteristics key to next-generation mobile security:
  1. Focus on users
  2. Agile multi-factor authentication (MFA)
  3. Mobile Single-Sign-On (SSO)
  4. Protect data
  5. Simplified unified security

Ilex International’s Sign&go Mobility Center is a solution that combines all the features of a modern mobile security solution without the pain of having to mix and match separate tools into a unified service. Find out more about Sign&go here.

Read the full Goode Intelligence report here.

Thursday, 30 June 2016

Are your identity and access management systems effective?

Information is the lifeblood of all organisations. It is essential to measure tangible impacts and also to recognise intangible ones, so as to understand their effects on organisational decision-making.

Establishing an understanding of the effectiveness of an organisation’s identity and access management (I&AM) systems is not straightforward. It is equally challenging to identify the efficiency with which these systems meet the desired levels of effectiveness. For example, requiring employees to recall numerous identifiers and associated passwords has complex effects on security effectiveness and hinders employee productivity.

CISOs and other cyber security professionals often acknowledge the challenge of obtaining clear visibility of which approved users in their organisation (and in their partners’ and agents’ organisations) have authorised access to applications and resources in their IT infrastructure.

I&AM practitioners assess the effectiveness and efficiency of identity and access management systems using ten broad evaluation themes.

1. Functional requirements

This theme determines the extent to which an organisation’s I&AM systems fulfil their functional requirements (e.g. supporting user enrolment, credential distribution etc.) to manage users’ (e.g. employees, agents, customers etc.) access to applications and resources. Functional requirements are derived from an understanding of several factors, including business operational requirements, the characteristics of the user communities and their devices, the applications and resources needing protection, and the technological and regulatory constraints of the operating environments. Political and stakeholder economic interests may also influence an organisation’s functional requirements, as well as the performance requirements needed to mitigate identified risks to its assets.

2. Performance requirements

Performance requirements relate principally to the accuracy and speed with which an organisation’s I&AM systems authenticate approved users. While biometric user authentication systems strive to meet tough impostor detection and genuine-user authentication threshold rates, the true accuracy of some knowledge-based user authentication systems is often masked, e.g. passwords can be phished.

A requirements evaluation should determine acceptable and realistic accuracy and throughput rates, based upon practical experience in the intended operational environments and the organisation’s risk mitigation strategy. These rates should not be taken from vendor claims that have not been corroborated. Empirical evidence suggests that, for some biometric authentication systems, insufficient thought has been given to setting acceptable performance in relation to risks. The inevitable result is that the performance of some biometric user authentication systems often falls short of an organisation’s expectations.
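Choosing an operating threshold is essentially a trade-off between the false reject rate (genuine users turned away) and the false accept rate (impostors let in). A minimal Python sketch of computing both for one operating point, using entirely hypothetical match scores:

```python
def error_rates(genuine_scores, impostor_scores, threshold):
    """False reject rate (genuine users scoring below the threshold) and
    false accept rate (impostors scoring at or above it)."""
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return frr, far

# Hypothetical match scores gathered in a pilot of the intended environment.
genuine  = [0.91, 0.85, 0.62, 0.88, 0.79]
impostor = [0.30, 0.55, 0.41, 0.72, 0.28]

print(error_rates(genuine, impostor, 0.7))  # (0.2, 0.2)
```

Raising the threshold lowers the false accept rate at the cost of rejecting more genuine users; the acceptable balance is exactly what the requirements evaluation, not the vendor, should determine.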

3. Regulatory alignment, including privacy protection

This theme is designed to assess the ability of an organisation’s I&AM systems to demonstrate their compliance with data protection, privacy, social accessibility and discrimination legislation. Equally, this assessment needs to establish the degree to which deployed I&AM systems comply with an organisation’s governance and security policies, or possibly international standards.

4. Technical reliability 

This theme evaluates the assurance capabilities of an organisation’s I&AM systems to resist attacks and/or errors and to detect when a user authentication method has been compromised. An assessment needs to identify unauthorised access attempts in order to establish how well the user authentication methods resist various types of attack, and to ascertain how difficult it is to produce artefact and/or credential data that circumvents the user authentication system.

Tests planned for an assurance assessment require substantiated data from audit logs which record the user access events and the corresponding administrative actions. The tests should take place during planned day-to-day activities and should additionally allow for unexpected events. Assurance testing should involve individuals from the intended user community in their operating environment in order to augment assurance test data produced under controlled conditions.

5. Usability of the user authentication methods

This assessment theme is designed to assess the usability of the deployed I&AM system’s user authentication method, particularly regarding the alignment of the user interaction dialogues with the users’ everyday tasks.

The inadequacies of HCI security designs often dilute the effectiveness of preventative controls. Despite these usability design deficiencies, security effectiveness improves when users can make informed decisions based on a better understanding of a device’s security operations.

Knowledge-based authentication systems are particularly prone to password management problems. Increasing the number of permitted password attempts can improve users’ chances of successful recall. However, this strategy may marginally increase the opportunity for an external adversary to obtain the authentication data.
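The trade-off can be made concrete. Under the simplifying assumption of an attacker guessing uniformly at random before a lockout triggers, the chance of success grows linearly with the number of permitted attempts – hence "marginally". A small Python sketch (the password policy used is purely illustrative):

```python
def guess_probability(attempts_allowed, keyspace):
    """Chance a uniformly random-guessing attacker succeeds before lockout.
    Assumes each guess is distinct and every password is equally likely."""
    return attempts_allowed / keyspace

# An 8-character lowercase password: 26^8 possible values.
keyspace = 26 ** 8
for attempts in (3, 10):
    print(attempts, "attempts ->", guess_probability(attempts, keyspace))
```

Going from 3 to 10 attempts roughly triples a random guesser's odds, but against a large keyspace both probabilities remain negligible; the real risk shift comes from weak or reused passwords, not the attempt limit itself.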

6. Accessibility of the user authentication methods

The criteria in this theme are designed to gauge the extent to which the organisation’s deployed user authentication methods exclude certain members of the user community.

An organisation’s I&AM system’s user authentication method may require individuals in the user community to possess specific technologies, sensory skills and/or cognitive skills. Equally, some individuals may fail to enrol for some biometric systems because they are unable to provide signals of sufficient quality, e.g. capturing fingerprint minutiae. Some customers may simply refuse to use an Internet Service and the associated I&AM system due to the unacceptability of some biometric modalities.

7. I&AM system’s manageability

This theme is designed to assess an organisation’s ability to manage the computer application systems, networks, devices and other components/technologies during the anticipated lifetime of its deployed I&AM systems. A shortage of personnel with the competencies required to support the organisation’s I&AM components may lead it to seek cloud-based user authentication systems.

8. Technical and non-technical vulnerabilities

This evaluation theme concerns the identification of deficiencies in the organisation’s I&AM systems and the potential impact should those systems fail to function fully as designed.

This assessment includes the protection of the authentication data upon which user authentication depends. Non-technical vulnerabilities include the likelihood of user error. Users’ limited capacity to memorise multiple or complex passwords may lead to undesired behaviour, e.g. saving passwords on devices for easy access.

Additional controls may need to be introduced to minimise the impact of the identified vulnerabilities. However, this invariably increases the expenditure required to mitigate the risks associated with user access control.

9. Identified issues

This theme is designed to evaluate the issues identified during an assessment of the organisation’s operational usage of its I&AM systems. According to many security practitioners, all I&AM systems possess vulnerabilities, attract issues and incur costs.

Again, organisations may incur additional costs in their attempts to reduce the impact of the issues associated with their deployed I&AM systems. The costs of mitigating residual risks and identified issues should be proportionate to the value of the assets being protected.

10. Stakeholders’ costs

This theme is designed to review the costs (both direct and indirect) of an organisation’s I&AM systems in managing risks and fulfilling organisational objectives.

Direct costs relate to the expenditure (capital and operating) of the identity and access management systems themselves. These expenditures include software components, infrastructure costs (network, PKI, etc.) and also support, including personnel costs. Indirect costs relate to the losses and recovery/compensation expenditure associated with access control security breaches. Lost productivity may also be construed as an indirect cost.

These cost elements are essential for decisions weighing the deployment of I&AM systems for risk mitigation against their costs, and for predictions of security return on investment.
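One common way to bring these cost elements together is a return-on-security-investment (ROSI) calculation: the annual loss expectancy avoided by the control, net of the control's own cost. The figures below are entirely hypothetical; only the formula is standard:

```python
def rosi(annual_loss_expectancy, risk_reduction, solution_cost):
    """Return on security investment:
    (risk mitigated - solution cost) / solution cost."""
    mitigated = annual_loss_expectancy * risk_reduction
    return (mitigated - solution_cost) / solution_cost

# Hypothetical figures: £500k expected annual breach losses, an I&AM
# deployment expected to reduce that risk by 60% at £150k per year.
print(rosi(500_000, 0.60, 150_000))  # 1.0, i.e. the investment pays back twice over
```

A positive result suggests the control mitigates more loss than it costs; a negative one suggests the expenditure is disproportionate to the assets being protected, echoing the proportionality point made under theme 9.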


Acquired data needs to be evaluated within an analytical framework in order to make sense of information collated from a variety of organisational perspectives, e.g. business activities, risk management, legal and regulatory compliance, IT operations etc. The qualitative data acquired assists in explaining the quantitative data gathered during an evaluation.