05/01/2024
Using AI to Enhance Applications of Career Theory to Practice
By Francine Fabricant
Many interventions used by career development professionals (CPs) are grounded in career theory. Thus, viewing a client’s case through a different theoretical lens can offer a shift in perspective and new ideas. However, many CPs have limited experience applying a range of career theories in practice, and even experienced CPs may find themselves relying too heavily on the theories most familiar to them. Artificial Intelligence (AI) can bridge this gap by allowing CPs to experiment with applying theory to practice, using large language models (LLMs) to analyze real-world career concerns through a variety of theoretical orientations.
The Power of Large Language Models
LLMs, such as ChatGPT, Microsoft Copilot, and Claude, use AI to generate responses to prompts in language that mimics human writing. In fact, research has shown that LLMs mimic human language so effectively that AI-generated text was perceived as more likely to be written by a human than human-written text (Jakesch et al., 2023). Nevertheless, an LLM’s response to a prompt may be helpful, incorrect, or even false. A useful analogy is to think of an LLM as an intern so eager to please that it makes mistakes (Mollick, 2023). These mistakes are called “hallucinations” because the AI may confidently present misinformation as if it were factual. Users can improve results by asking follow-up prompts and checking other sources.
Using AI to Learn About the Application of Theory to Practice
For CPs interested in reviewing a client’s case through a theoretical orientation, LLMs can serve as a useful tool. Similar to case study examples, such as the video series created by participants in NCDA’s 2018 Counselor Educator Academy (Brooks et al., 2022), this approach allows CPs to learn from a case analysis of a real-world scenario. Unlike written cases and videos, however, LLMs can provide feedback on career concerns in real time and can even compare multiple approaches on request.
Strategies and Recommendations for Career Professionals
A step-by-step approach for using ChatGPT or Claude to learn about career theory is presented below, though these tips can be used with any LLM. In fact, trying them with a variety of LLMs will offer insight into differences in the results generated by different AI models. To maintain clients’ confidentiality, identifying data should be omitted, and, for greater privacy, data-training and conversation-storage functions can be turned off in ChatGPT (Mauran, 2023). Privacy settings may change or differ among LLMs, so always check current resources.
- Assign a role for the AI. Assigning a role capitalizes on the LLM’s ability to access the language, tone, and content of that identity (Ramlochan, 2023). For instance, ask ChatGPT or Claude to “pretend you are a career counselor.”
- Provide a theoretical lens. Next, for example, ask the LLM to “assume the theoretical orientation of Chaos Theory of Careers.”
- Provide context while omitting identifying details. As one example, tell ChatGPT that “the following information is about a client who is exploring career fields.” Then, offer a description of relevant data, including interests, activities, values, experiences, and other information that helps set the stage for the specific career concerns you wish to explore. Alternatively, share a résumé with identifiers removed. Adding details about careers, education, or training that a client does not want to pursue can also be helpful.
- Ask for specific information and add the phrase “consistent with the theory.” To examine the case through a theoretical lens, follow each request with the phrase “... consistent with the theory” (e.g., “make suggestions for career exploration that are consistent with the theory” or “provide ideas for networking that are consistent with the theory”).
- Ask the LLM to explain. Follow up for greater detail, as in “Can you explain why the first suggestion is consistent with the theory?” and other “why” questions. Then, ask ChatGPT to repeat the process for another theory, as in “Please reinterpret the same case through the theoretical orientation of Psychology of Working Theory.” To examine differences, for example, ask ChatGPT to “Please compare how this differs from the prior analysis which utilized Chaos Theory of Careers.”
- Troubleshoot. Despite clear instructions, results may not be fully consistent with theory. The CP’s expertise should guide their ability to question and challenge LLMs. For example, ask ChatGPT to “review the results and describe how they reflect the theory” and “acknowledge if there were any errors in the results.” In a recent trial, after the CP asked Claude to check its response for consistency with the theory, it responded, “You're absolutely right, my previous response did not fully align with the Chaos Theory of Careers framework. Let me try again, keeping the core principles of complexity, change, and uncertainty in mind:” This was followed by a new series of suggestions.
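For CPs comfortable with a bit of scripting, the same step-by-step workflow can also be run programmatically rather than typed into a chat window. The sketch below is a minimal, hypothetical example using the OpenAI Python library; the model name, the sample client details, and the ask() helper are illustrative assumptions rather than part of the approach described above, and the identical prompts work equally well in the ChatGPT or Claude interface.

```python
# Minimal sketch of the prompting workflow described above, using the
# OpenAI Python library (pip install openai). The model name, sample client
# details, and the ask() helper are illustrative assumptions, not a fixed recipe.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Steps 1 and 2: assign a role and a theoretical lens via the system message.
messages = [
    {
        "role": "system",
        "content": (
            "Pretend you are a career counselor. "
            "Assume the theoretical orientation of Chaos Theory of Careers."
        ),
    }
]

def ask(prompt: str) -> str:
    """Send one prompt, keep the running conversation, and return the reply."""
    messages.append({"role": "user", "content": prompt})
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name; substitute any available model
        messages=messages,
    )
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply

# Step 3: provide context with identifying details omitted (sample data only).
ask(
    "The following information is about a client who is exploring career fields: "
    "interests in writing and data visualization, five years in nonprofit "
    "communications, values autonomy and creativity, and does not want to "
    "pursue graduate school at this time."
)

# Step 4: ask for specific information consistent with the theory.
print(ask("Make suggestions for career exploration that are consistent with the theory."))

# Step 5: ask the LLM to explain, then reinterpret through another theory.
print(ask("Can you explain why the first suggestion is consistent with the theory?"))
print(ask("Please reinterpret the same case through the theoretical orientation of "
          "Psychology of Working Theory."))

# Step 6: troubleshoot by asking the model to check its own consistency.
print(ask("Review the results and describe how they reflect the theory, and "
          "acknowledge if there were any errors in the results."))
```

Because the helper keeps the full message history, later requests, such as asking the model to compare the new analysis with the prior one, work the same way as in the chat interface. Note that privacy and data-use settings for API access differ from those in the consumer chat products, so they should be reviewed before including any client-related details.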
Fostering a Conscientious AI-Enabled Practice
Artificial Intelligence can provide CPs with a new method for applying theory to practice. Nevertheless, AI is still new and developing, and CPs should be cautious when implementing these strategies. Conceptualizing any interaction with an LLM as an exchange with another learner encourages CPs to apply their expertise to the process, providing a check against errors as well as opportunities for greater insight.
References
Brooks, T. P., Hardaway, Y. D., Higgins, M., & Weingartner, A. (2022, August). Bridging the gap: From career counseling theory to practice. Career Convergence. https://www.careerconvergence.org/aws/NCDA/pt/sd/news_article/453794/_self/CC_layout_details/false
Jakesch, M., Hancock, J. T., & Naaman, M. (2023). Human heuristics for AI-generated language are flawed. Proceedings of the National Academy of Sciences, 120(11), e2208839120. https://www.pnas.org/doi/abs/10.1073/pnas.2208839120
Mauran, C. (2023, April 25). ChatGPT rolls out important privacy options. Mashable. https://mashable.com/article/openai-chatgpt-chat-history-privacy-setting
Mollick, E. (2023). On-boarding your AI intern. One Useful Thing. https://www.oneusefulthing.org/p/on-boarding-your-ai-intern
Ramlochan, S. (2023, May 27). Role-playing in large language models like ChatGPT. Prompt Engineering Institute. https://promptengineering.org/role-playing-in-large-language-models-like-chatgpt/
Francine Fabricant, Ed.D., helps people rethink their career opportunities and build careers that are personally meaningful and rewarding. She is a certified career counselor and the lead author of Creating Career Success: A Flexible Plan for the World of Work. A frequent speaker on career topics, Francine addresses real-world concerns and offers practical ideas and solutions. Her research has examined how career counselors learn about the impact of AI and automation on careers. She has worked at Columbia University, Hofstra University, and the Fashion Institute of Technology (FIT). She received a BA cum laude from Barnard College as well as an MA in Organizational Psychology, EdM in Psychological Counseling, and EdD in Adult Learning & Leadership from Teachers College, Columbia University. Her community-based workshops have been profiled by The New York Times. She can be reached at francine@francinefabricant.com.
3 Comments
Michelle Tullier on Thursday 05/02/2024 at 08:26 AM
Such an innovative use of AI with important practical applications! Thank you, Francine for this valuable contribution.
Marcela Mesa on Thursday 05/02/2024 at 08:44 AM
It's important to note that anyone who uses these tools without deeper prior knowledge of the subject will not have the criteria to decide whether what AI produces is useful or not. I think it could be useful in training and supervision groups, fostering a climate of critical thinking.
Anthony Musso on Thursday 05/02/2024 at 09:57 AM
Last week, for the first time, I "talked back" to AI. In the past, whenever it gave me something that I didn't like, I assumed that I didn't word it correctly and would just ask it in a different way. I was working on a resume and seeking alternative wording to what was already there. It reworded it as if the applicant was already working for that job, not the one being applied for. I then stated, "You wrote that as if the applicant is working there, and they are not." It responded, "You're right; here's the correct way to word it." This is a great article that highlights the need to use and challenge LLMs to make them stronger and more competent for future use.