SFK-GICRI

Introduction

The SFK Global Intelligence and Cognitive Reasoning Inventory (SFK-GICRI) is a hybrid of a traditional intelligence test and an intelligence inventory. Unlike regular IQ tests, which grade each answer as simply right or wrong, the SFK-GICRI assigns a preset score to every answer the candidate submits. This tiered scoring system allows the test to measure not only aptitude in the traditional IQ sense but also the cognitive reasoning and thought processes behind each answer choice. Each question on the SFK-GICRI has six (6) answer choices, the majority of which are reverse engineered from the typical thought processes that lead to each selection.
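
To make the tiered scoring mechanism concrete, the sketch below uses a hypothetical scoring key; the choice labels, reasoning descriptions, and point values are illustrative assumptions, not the SFK-GICRI's actual scoring data.

# Illustrative sketch of a tiered scoring key (hypothetical values, not the
# actual SFK-GICRI key): each of the six answer choices carries a preset
# score tied to the reasoning path it represents, instead of a right/wrong mark.
from dataclasses import dataclass

@dataclass
class AnswerChoice:
    label: str      # "A" through "F"
    reasoning: str  # thought process the choice was reverse engineered from
    points: int     # preset score awarded for selecting this choice

SAMPLE_QUESTION_KEY = [
    AnswerChoice("A", "optimal lateral reasoning for the target job function", 5),
    AnswerChoice("B", "strict logical reasoning; correct but not optimal", 3),
    AnswerChoice("C", "correct pattern recognized, minor calculation slip", 2),
    AnswerChoice("D", "plausible but superficial pattern match", 1),
    AnswerChoice("E", "common order-of-operations error", 0),
    AnswerChoice("F", "guess or unrelated reasoning", 0),
]

def score_response(key, selected_label):
    """Return the preset score for the candidate's selected answer choice."""
    for choice in key:
        if choice.label == selected_label:
            return choice.points
    raise ValueError(f"Unknown answer choice: {selected_label}")

print(score_response(SAMPLE_QUESTION_KEY, "B"))  # prints 3

Under this model, re-benchmarking the test for a different job function amounts to adjusting the preset point values in the key rather than rewriting the questions themselves.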

The SFK-GICRI is designed as a “pre-application” screening assessment, allowing prospective employers to assess a potential candidate’s aptitude for a position before requesting the work history and education information typically found on a resume.

Basis of “pre-application” use and minimizing pre-employment bias

The SFK-GICRI is atypical in that candidates take the assessment before, rather than after, formally applying for a position. In the latter case, employers often unintentionally exhibit several cognitive biases that lead them to favor a non-optimal candidate:

  • Framing bias – A framing bias occurs when decisions are made based on the way information is presented. For employers, this means a well-tailored resume can draw them toward a different, and perhaps unduly favorable, conclusion about a candidate.
  • Narrative fallacy – A narrative fallacy occurs because of our human tendency to like stories, as they are easier to make sense of and more relatable. For employers, it means favoring a less optimal candidate based on reading a well-polished and engaging cover letter.
  • Anchoring bias – Anchoring refers to using pre-existing data as a reference point for subsequent data, which can significantly skew the decision-making process when selecting a candidate. Candidates whose resumes indicate prestigious education or work history often anchor themselves as desirable candidates on that basis alone. While this provides insightful information to employers, it can skew their views of equally capable candidates who may have less prestigious schools and companies on their resumes.
  • Confirmation bias – Based on the idea that people seek out information that confirms their pre-existing ideas, confirmation bias can significantly deter employers from finding an optimal candidate. Suppose a candidate went to the same school or worked for the same company as the hiring manager; such information could sway the hiring manager to see that candidate in a more favorable light. Confirmation bias is perhaps one of the more dangerous fallacies to hold, as it equates favorable subjective data with objective measures of aptitude.
  • Representativeness heuristic – This cognitive bias occurs when employers falsely believe that if two candidates are similar in specific ways, they will perform the same. Often, employers fill open positions after a previous employee has departed the company. It is not uncommon, then, for employers to prefer candidates with traits and experiences similar to those of the employee being replaced. While on a personal level this thinking seems rational, there is no objective data to verify that the two candidates will perform the same.

With those biases in mind, having candidates take the assessment before formally reviewing their candidacy ensures the initial pool of candidates exhibits the aptitude and skills required for the position. The formal review can then serve more as a means of determining cultural fit within the company than as an objective measure of aptitude.

Basis of differentiation

The SFK-GICRI differentiates itself from standard aptitude tests in several areas. 

  • Stress Factor – Traditional IQ and aptitude tests are designed around a “work under pressure” mentality: candidates are under severe time pressure to complete the assessment and often rush through specific questions or simply try to finish as many as they can. While this may seem a sound strategy, it encourages gamification of the assessment. The candidate’s focus shifts to finding an optimal strategy for each specific evaluation; thus, the tests are no longer reliable at measuring the intended variables. The SFK-GICRI has a time limit of one hour for 25 questions, which puts significantly less time pressure on the candidate and allows for a proper assessment under typical working conditions.
  • Cheating Controls – Traditional tests typically require the candidate to take the assessment in a supervised environment, which can be costly for the employer. The SFK-GICRI is designed to be taken at the candidate’s leisure and has built-in controls against cheating. The questions are structurally unique, so a simple web search will not garner the candidate any answers. Additionally, copying and pasting of items and images is much more difficult given the testing format. While a candidate could theoretically cheat successfully given no time constraints, the one-hour time limit ensures they will not succeed if they attempt to use outside resources on more than a couple of questions. Furthermore, even if a candidate tries to cheat on a question or two, they will quickly learn that the time required for such an endeavor defeats the entire purpose of cheating, forcing them to revert to taking the assessment in good faith.
  • Use of tools – Candidates are encouraged to use calculators and scratch paper for the SFK-GICRI, as it is assumed anyone taking an at-home test will. This test is a measure of work performance within reasonable time constraints. As such, candidates are encouraged to use simple tools that make calculations and visualizations easier to comprehend, as they would in any typical work environment. Traditionally, calculators are banned in similar assessments because the answers to many questions rely on mathematical calculations. However, while the SFK-GICRI includes a mathematical abilities section, it focuses more on how candidates find patterns and use the numbers to draw conclusions. As such, using a calculator will not give one candidate a significant advantage over another who opts not to use any tools.
  • Tiered scoring and the hybrid inventory model – An inventory test typically has no right or wrong answers and is thus more common in personality assessments than in traditional aptitude tests. The flaw in most aptitude tests is the absolute right or wrong answer, which does not take the candidate’s reasoning into account. In such tests, a lot of valuable data on the candidate is left on the table, and employers end up concluding that the candidate with the highest score will likely be a top performer. While aptitude tests correlate with performance to some degree, they do not necessarily select the best candidate. Most often, they select the candidate best trained at taking standardized tests, which leads us to our last differentiating factor.
  • Non-standardized structure – Because the SFK-GICRI is designed with no predictable or repeatable structures, there is no specific “strategy” for taking it. There are no multi-million-dollar test prep centers that economically privileged candidates can enroll in to gain a leg up on competitors. The SFK-GICRI tests global intelligence and cognitive reasoning abilities in a comfortable and relatively stress-free working environment, nothing more. This structure is highly beneficial to employers, as it allows them to single out the highest-performing candidates without having to worry about any test-specific advantages a candidate may or may not have had.

Basis of use

The SFK-GICRI, as an aptitude assessment, is designed for employers to quickly pre-screen candidates before the application process. The assessment is designed for pre-application use, saving the employer from having to sort through hundreds of resumes and saving prospective candidates from having to tediously fill out an application for a job they may not qualify for.

Problems with today’s hiring structure

Most jobs listed today follow a standard format, asking candidates to pre-fill education, work history, distinctions, and references, in addition to uploading a cover letter and resume. While this eases the screening process for employers, it negatively impacts candidates, as it requires a significant investment of time and effort on their part. While conventional wisdom states that candidates who make such efforts in the application process are more serious about the job, that may not be entirely true.

For one, most job hunters apply to numerous positions across varying industries and functions, so the assumption that a candidate making the effort to complete a tedious application is more serious about the job is false; they are merely playing the game as it is written, and there is not much leeway around the process on their end.

Second, candidates are typically selected based on experience and keywords in their resumes. While this may have been a good strategy in the 1990s, humans can easily outsmart automated systems by tailoring their resumes to each specific position, placing relevant keywords and padding previously held titles and positions with adjectives favored by the position at hand. This process is known to create the false-positive candidate: one who successfully games the hiring process but lacks the aptitude required to hold the position. By the time the hiring company realizes this, it has already invested significant financial resources in that candidate.

Lastly, today’s hiring process may turn away top-performing candidates based on the effort required for the initial application. Most top performers understand quite well that hiring from an open application pool is a numbers game, and while a prospective top-performing candidate may be interested in an open position, they often conclude the time involved is not worth the single-digit-percentage chance of being invited for an interview. As such, most top performers will not even apply if they deem the technical roadblocks not worth enduring.

How the SFK-GICRI makes hiring better for both sides

The SFK-GICRI simplifies the hiring process by allowing employers to quickly screen applicants for the required aptitude and by allowing candidates to quickly submit their interest in a position once they have taken the initial assessment. This system benefits employers because they now have a qualitative and quantitative assessment of a candidate’s aptitude, something not easily found on a resume.

With this information, they can then request resumes and cover letters from candidates who have demonstrated the aptitude to excel in the position. It also benefits candidates: once they take the assessment, they can quickly upload their certificate or input their confirmation number to verify that they have taken the assessment and scored in the desired range for the position.

Interview

We have invited Sherafgan Khan, Managing Director of SFK Inc, to talk about the SFK-GICRI, a new aptitude test developed by his team, which allows employers to pre-screen desirable candidates based on a new model of evaluation.

Interviewer (IV): Sherafgan, thank you for your time. We’ll get right to it: can you tell us a little more about your company and what SFK Inc. does?

Sherafgan Khan (SK): My pleasure, and thank you for having me. SFK Inc. is many things, if I have to be blunt! We started as a small marine surveying and consulting firm, targeting logistics and transportation clients in the oil and gas industries. From there, we doubled down on our consulting arm, focusing more on supply chain strategies and risk management for a broader range of clients while still holding a strong legacy client base. It wasn’t until 2015 that I wanted to take the company in a different direction, or “widen the bubble,” to be more accurate. We put more time and resources into data, analytics, and automation systems to help our clients with new and unique problems that weren’t around, say, ten to fifteen years ago. Today, a lot of our products and technology services have stemmed from specific client projects that we deemed would be beneficial to the public. That is, we didn’t set out to create these services to sell en masse, but once we had built the foundations, we figured, why not?

IV: Interesting turn of events! So I’m assuming this test, the SFK-GICRI, was one of them? Can you tell me a little more about this test and how it came about?

SK: That’s why we’re here, right! SFK-GICRI stands for the SFK (our company) Global Intelligence and Cognitive Reasoning Inventory, but that’s not what we called it when we first designed the fundamentals. Initially, it was a much more watered-down aptitude test for a client in a particular industry. Our client was experiencing a high turnover rate for a specific group of positions and wanted a solution to find the diamonds in the rough, so to speak. We built the initial test with their particular needs in mind, and once it was implemented, the six-month voluntary turnover rate for those positions dropped significantly, by about 60 to 70 percent.

IV: So I know you can’t go too much into the details regarding your client, but was there a reason the turnover rate for that position was so high?

SK: Yes, we looked into the reasoning and precursors to an employee quitting, and even followed up with some who were willing to speak with us. It wasn’t one specific position but rather multiple positions in a vital area of their operations. We found the turnover rate was so high because of the stress and constant awareness required to hold those positions; most of the people who left were simply overwhelmed by the demands.

IV: So, at the risk of asking a dumb question, was it not the employer’s responsibility to screen the employees properly and deem them fit for such a position? Why was a test needed when they could have easily determined the employee was not qualified based on their work history?

SK: Well, yes and no. Of course, the employer is responsible for doing their due diligence, particularly when they knew the demanding nature of the positions. However, they did everything by the book: they looked at the initial applications, questioned those who stood out (via an informal phone interview), then gave them a cookie-cutter aptitude test and interviewed those who passed. The ones they liked, they hired. But they were doing it all wrong if low turnover was their goal. The problem with screening resumes is the ease of manipulation. Candidates can buff their resumes with keywords the recruiter’s software is looking for and even stretch the truth about the demanding nature of their past work experience. The thing you have to realize is that many people like the idea of their job being demanding, and they often convince themselves of just that. So now that they have bypassed the initial screening, they move on to the phone interview, which really only confirms their interest and qualifications, and then take one of a handful of generic aptitude tests. Of course, they pass those with ease.

IV: I see, well tell me more about these generic aptitude tests, why are they so easy to pass?

SK: Perhaps I misspoke. I don’t mean to imply that all aptitude and IQ tests are easy; rather, the people applying for these positions already have the level of aptitude and skill required for the job, but that doesn’t tell you the whole story, unfortunately. The standard tests given to these candidates confirm they have the intelligence and critical thinking abilities for these specific positions; however, they rarely examine the thought processes and cognitive reasoning abilities that led the candidates to their answers. Most aptitude test questions have one right answer, and if the test-taker gets it right, they’re deemed knowledgeable about that question. You don’t see many pre-employment tests that dig into how candidates came up with their answers if those answers are wrong.

IV: Correct me if I’m wrong, Sherafgan, but I’ve taken several intelligence/aptitude tests, and the wrong answers are usually pre-planned, in the sense that if a misstep in a calculation or in logical reasoning happens, that answer is present to deceive the test-taker, correct? Say a question is (2+2)*5 [open parenthesis, 2 + 2, close parenthesis, multiplied by 5]; the correct answer is 20, yet 12 will appear in the answer choices to target those who followed the order of operations but ignored the parentheses.

SK: Yes, and that is precisely the problem with most of these tests. The creators spend a lot of time deceiving the candidate with false answers that correspond to flawed logic. While this may be fine for traditional IQ tests, it’s a flawed model when it comes to measuring aptitude for a specific position. The faulty-logic model tells the test maker that the candidate missed a logical step and penalizes the candidate for it. The SFK-GICRI focuses heavily on lateral models of thinking, where several answer choices to a question may be correct, but only one is optimal. This lateral model takes the candidate’s thought process out of the all-or-nothing mentality and shifts it to a cognitive reasoning mentality.

IV: If I understand you correctly, the SFK-GICRI still has an “optimal” answer. How is this different from a “right” or “best” answer on a traditional test? And can you clarify what you mean by a lateral model?

SK: The “optimal” answer for each question varies based on the position being tested for. While each question will have a “most right” answer, as on a conventional IQ test, the SFK-GICRI focuses more on the right thought process for the job at hand. So while an optimal solution may correlate with higher IQ (when not corrected for the specific position at hand), merely having a higher IQ doesn’t make one candidate more fit for a job than one with a lower IQ, so to speak. What’s more important is the thought process that led the candidate to select one answer over another. While the higher-IQ person may have chosen his or her answer based on strict logical reasoning, an “optimal” candidate may select a solution based on a form of thinking that the higher-IQ person failed to exhibit. Think “out of the box” mentality. To the person of higher IQ, that answer would be wrong, because he or she took a stricter problem-solving approach. A lateral model, then, is precisely the “out of the box” style of thinking needed for a specific position. Think of logical reasoning as a direct approach to problem-solving, and lateral reasoning as a more indirect and creative approach.

IV: Very interesting, this reminds me of an article I read a while back regarding the recruitment of police officers, and how they specifically wanted people with lower IQs on the police force. Does this fall in line with what you are testing?

SK: That’s a good example you’ve brought up, but I need to clarify some things. News headlines are notorious for taking complex research conclusions and watering them down with dramatizations. It’s not that police departments desire “low IQ” officers, or even that police officers have a lower IQ than average. They score slightly above average, if we want to get technical! Those headlines center on broad conclusions drawn from theoretical models, given the specific parameters at hand. In this case: a standard intelligence test given as a pre-employment test to prospective police officers. The logic that followed was that those with IQs in the “very high” or “extremely high” ranges would get bored with police work over time, as the work was deemed not challenging enough for them. While this logic does correlate to some degree on a broad scale, it does discriminate, since IQ is not the sole indicator of whether an officer would get bored with his or her duties over time. The flaw in this model was the test given: a standard IQ test. The IQ test measures raw intelligence on a mostly standardized scale, relative to the entire population. It does not measure passion, lateral thinking, or whether the officer would get bored with the job. It draws causal conclusions from correlations within the population, and correlation-based decisions are often deemed “good enough,” but never optimal.

Simply put, high IQ correlates with boredom in police work, but it is not the cause of boredom. They simply didn’t have a test that could measure what they actually wanted to measure. Thus, they had to rely on correlations drawn from a broad measure (the IQ test) and infer causation from them, which is a fallacy.

IV: Wow, well, I certainly have a new outlook on police officers! So if they were to take the SFK-GICRI, how would the conclusions differ?

SK: Well, it’s funny you ask, because we designed one specifically for police officers. Unfortunately, as with any government agency, the bureaucracy and red tape make it challenging to implement change on a large scale. While some departments loved the idea, it was not feasible for them to implement it into their hiring process. Many decisions and processes are standardized at the state level, and there are numerous contracts in place for testing agencies, psych evaluations, you name it. Try going to a psychiatrist and telling them a for-profit business developed a test that outperforms the time-tested methods they’ve grown used to, and you’re going to have a bad time. So we took it to a private security firm, where it was much easier to implement.

IV: Well? Don’t leave us hanging, Sherafgan!

SK: Of course not! We learned quite a bit from this firm. It turns out the high-IQ discrimination is not only about boredom but also about following directions and not making too many autonomous and unilateral decisions. Similar to those in the military, officers, security guards, and service members are most efficient when following a strict chain of command and executing orders as told. This philosophy ensures departments and units run effectively and enforce rules and laws consistently. Our testing found that those with traditionally higher IQs tend to favor more autonomy and are not too keen on the idea of following orders, particularly from someone they deem “less intelligent.”

Additionally, those with higher IQs tend to have a higher capacity for individual-level problem solving, but not high enough to forecast the downstream consequences of their decisions. Of course, this is all correlation based strictly on IQ. Our test indirectly measures intelligence, but more importantly, it measures aptitude and qualifications for the job at hand, in this case, a security guard. There were many candidates with significantly higher IQs whom we deemed fit to excel at the job with minimal boredom. Keep in mind that IQ is just one of a plethora of variables that go into determining candidate success at a job. The broad conclusion of our test for this security firm was that a higher IQ correlated with less-than-stellar performance (based on pre-defined variables) but was not the cause of it. Again, it was the thought processes that determined, with high accuracy, whether a candidate would excel and stay happy. Had they used the traditional IQ test measure, they would have eliminated a handful of exceptional staff and taken on an equal number of unfit candidates.

IV: So, essentially, you’re saying your field study concluded the higher the IQ, the less likely they would be to excel as a security guard? How was it different than before?

SK: Throwing the hard punches, I love it! Again, the IQ portion was simply a correlation and not the cause, and I want to reiterate that, as it is common for news outlets and media to fall into the correlation-causation fallacy. You can think of it as “good enough” versus “optimal.” For most employers, good enough is, well, good enough. They have a preset notion that many hires will fall through the cracks and that they will replace employees as needed. This relies on the false assumption that changing methods would cost more than keeping things the same, and it fails to consider the long-term costs of that strategy. While the “good enough” method of merely benchmarking IQ works for many, as I’ve alluded to, it leaves great candidates on the table while less-than-stellar candidates take their place. In a broader business context: your competitors will take those stellar candidates, whom you’ve refused in favor of the less stellar, essentially doubling down on your “competitive disadvantage.”

IV: Ah, that makes more sense when you put it that way. So I looked over the white paper you’ve authored before the interview and noticed you use the words “inventory,” “evaluation,” “assessment,” and “test” interchangeably. Also, you mention the SFK-GICRI is “non-standardized” yet administered as a multiple-choice test to people in varying industries. Can you elaborate on these? 

SK: Yes, so “evaluation” and “assessment” both simply mean “test,” or the SFK-GICRI, in this context. “Inventory,” on the other hand, is a specific word used to describe a particular type of test. Think of the MMPI, the Minnesota Multiphasic Personality Inventory, which assesses personality traits and psychopathology. It’s specifically called an “inventory” because it is more of an assessment of a patient’s mental health, and there are no right or wrong answers; it is a self-report test that requires a clinical follow-up by a licensed and trained psychologist. The SFK-GICRI is self-described as a hybrid model between an inventory test and an aptitude test. Merely calling it one or the other would be wrong, as it has strong traits of both. While there are “optimal” answers, which in layman’s terms simply means “right” answers, getting the right answers is not enough; we’re looking for the proper thought process, hence the decision to call it a hybrid model. Concerning the non-standardized structure, we simply mean that the test isn’t one-size-fits-all. It is tailored specifically to the job function, and two candidates tested for the same job function will rarely see significant overlap in the questions asked.

IV: Thank you for that clarification. So if it is not standardized, as you say, how is it a reliable measure of aptitude between two candidates?

SK: Think of it this way: the term “non-standardized” is used more for semantics than as a clear-cut definition of test conditions. In terms of fairness and reliability, yes, it is “standardized” in that all candidates in a pool get the same base questions, albeit with a few changes we call “placebo variables.” These placebo variables are simply variations in the numbers used or in the initial shapes, tables, or diagrams shown. The structure of the questions and the measurable process for answer selection do not change. It just doesn’t meet the dictionary definition of a standardized test, in which all test-takers in the pool take the same test under the same conditions. People will mostly take this test at home or at work, a relatively comfortable environment. Again, it’s standardized where it matters: time limits, number of questions, and the base criteria we are measuring.
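
To make the “placebo variable” idea concrete, the following is a minimal sketch with a hypothetical question template of our own; it is not SFK’s actual generator, only an illustration of surface numbers changing while the underlying structure and calculation stay fixed.

# Minimal sketch of a "placebo variable" question template (illustrative only,
# not SFK's actual generator): the question structure and the reasoning being
# measured are fixed, while the surface numbers vary between candidates.
import random

TEMPLATE = "A shipment of {n} crates arrives every {h} hours. How many crates arrive in {d} days?"

def generate_variant(seed):
    """Produce one variant of the same base question plus its 'most right' answer."""
    rng = random.Random(seed)
    n = rng.choice([12, 16, 24])   # placebo variable: crates per shipment
    h = rng.choice([4, 6, 8])      # placebo variable: hours between shipments
    d = rng.choice([2, 3, 5])      # placebo variable: number of days
    answer = (24 // h) * n * d     # the underlying calculation never changes
    return TEMPLATE.format(n=n, h=h, d=d), answer

question, answer = generate_variant(seed=42)
print(question, "->", answer)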

IV: I apologize for sounding like a broken record, but if it doesn’t truly fit the definition of “standardized test,” how can it accurately measure potential employees against one another?

SK: No need to apologize! You are asking relevant and important questions. As I’ve alluded to, the SFK-GICRI measures candidates against one another where it matters. We have benchmarked this test against various IQ and aptitude tests and compared the different variations that make our test “non-standardized.” We’ve found that the testing environment and the placebo variables had no significant effect on the repeatability and validity of the SFK-GICRI across tested candidates.

Simply put, those who scored, say, 115-125 on traditional IQ tests performed equally well on the SFK-GICRI across all of its variations for a given job function, when we looked only at IQ, i.e., at getting those “optimal answers.” That doesn’t mean all of them were qualified for the job at hand, as variations within the answer choices determined optimal fit. Alternatively, when we benchmarked against happiness and job satisfaction inventories for a specific job function, ignoring IQ, we found that those with the highest satisfaction and happiness in their current jobs tested favorably for positions in their respective job functions, regardless of IQ. This tells us the test is both valid and reliable at measuring what it is intended to measure.
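
As a rough illustration of the kind of benchmarking check described here, the sketch below correlates made-up satisfaction scores with made-up SFK-GICRI fit scores and with IQ; all of the numbers, score ranges, and names are hypothetical and are not SFK’s validation data.

# Illustrative sketch (made-up numbers, not SFK's validation data): does a
# candidate's job-satisfaction score move with their SFK-GICRI "fit" score for
# the same job function, independently of traditional IQ?
import math

def pearson(x, y):
    """Plain Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical candidates already working in the job function being benchmarked
satisfaction = [8.5, 6.0, 9.0, 4.5, 7.0, 8.0]   # self-reported job satisfaction
gicri_fit    = [92,  70,  95,  55,  78,  88]    # hypothetical SFK-GICRI fit score
iq           = [128, 121, 104, 133, 110, 117]   # traditional IQ, for contrast

print(f"satisfaction vs. GICRI fit: r = {pearson(satisfaction, gicri_fit):.2f}")
print(f"satisfaction vs. IQ:        r = {pearson(satisfaction, iq):.2f}")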

IV: Pardon me, I think my brain just exploded! So, you’re saying that your test, the SFK-GICRI, measures candidates based on how they came to solve a problem, not necessarily whether they got the right answer. I’m going to assume there aren’t many tests out there that do this. How do you go about deciding what scores and methods are optimal for different industries, and particularly for the different jobs in those industries?

SK: Well, I suppose that’s the magic of our test, and what differentiates it from others! I can’t spill all of the secrets regarding the design and scoring mechanisms of the SFK-GICRI, but I can say that job functions rarely differ between industries. We often think someone working for a refinery has a vastly different experience than someone working at, say, a bank, but it’s just not true. The industry rarely, if ever, changes the core functions of a job. A salesperson selling business software may have a different experience if she transitions to, say, selling medical devices. However, the core functions are still the same: client relationships, leads, and, most of all, selling the company’s products or services.

Similarly, a secretary for a law firm versus one for a robotics manufacturing firm: again, different day-to-day experiences but the same core functions. He will still answer calls, make appointments, and draft reports. This overlap of experience makes our process much easier when you think about it. Variations in job function, for the most part, do not differ much between industries. Now, when you look at all the different types of job functions, there is a lot of overlap. Think of a massive Venn diagram of every possible job function; those bubbles will not be equally spaced out. Accountants, financial analysts, and investment bankers will all be part of a larger bubble, and they will likely share parts of that bubble with data scientists, statisticians, and math teachers. As a human, it’s easy to be overwhelmed by this seemingly Tower of Babel task, but we’re in 2020, and technology has made things a bit easier. We [SFK Inc.] have invested serious time and resources over the last several years in automation systems, data-driven solutions, and handling enormously large sets of raw data to draw meaningful conclusions from them. The SFK-GICRI is, basically, an accidental child of all that work.

IV: Fascinating, and I love the accidental child analogy! I know we don’t have much time left, but I wanted to quickly touch on something I read in the white paper about cognitive biases, and why the SFK-GICRI is given BEFORE an applicant even applies to the job. Can you go over this reasoning?

SK: I’d love to. The one key administrative feature of this test is that it is designed to be taken before candidates formally apply, i.e., fill out all their history, upload cover letters, and so on. It’s counter-intuitive, I know, but there is a good reason for it: applying to jobs sucks. People invest a decent amount of time applying to positions they are only marginally interested in, often just to get a foot in the door of employment. Not to go into too much detail, but it’s flawed conventional wisdom that a candidate willing to put time into an application is more serious about the job.

Additionally, the application process is flawed, as I’ve mentioned earlier, in that candidates can slightly alter their resumes and experiences to appear more desirable. The SFK-GICRI requires applicants to be tested before applying so that an optimal group can be short-listed before all the hiring formalities. The cognitive biases come into play in many different circumstances. Maybe the employer is impressed by a Harvard degree and places that candidate in a more desirable light? Perhaps they [the candidate] went to the same school as the hiring manager; that’s another bias. The white paper goes over all of these in a bit more detail, but the point is that humans are terrible at being objective when it comes to other humans. We may think we are fair and logical, but you’d be surprised at how the most trivial variables can have a massive impact on our decision-making abilities.

IV: The paper also mentions it is easier for the candidate, but how is taking an hour-long test better than a 20-30 minute application process?

SK: Well, that’s where things get a bit gray. Ideally, this test would be a precursor to all employment applications, but I know that’s not likely. Say several employers adopted this model and it became more widespread. Prospective candidates would then only have to take the test once and submit their scores instead of retaking it. Their scores would be verified by our internal system to validate their aptitude for a specific job function. The great thing about this test is that we only need to modify the scoring algorithm on our end to benchmark it against a specific job function. Although, if a candidate is applying to multiple and significantly different job functions, that should be a red flag in and of itself. It’s not far-fetched for someone in an accounting function to transition into finance, but if a mechanical engineer is trying to apply for a sales position, yes, a few red flags will pop up. In the early stages, yes, applicants will likely spend more time taking this test than applying, but the feedback we have received fell more in line with the words “fun” and “unique.” It’s a breath of fresh air for job applicants to deviate from the norm in terms of application procedures, so I’m not too worried about the test being an added burden.

IV: Well, Sherafgan, I could talk to you more about this all day, as it is truly fascinating work. Unfortunately, we are running behind schedule. Thank you so much for your time! We look forward to seeing what’s in store for the implementation of the SFK-GICRI. For those interested in learning more, you can read their white paper at https://research.sfkcorp.com/SFK-GICRI/

SK: My pleasure, thank you for having me.