DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/22/25 has been entered.
Remarks
This action is in response to the request for continued examination received on 12/22/25. Claims 1-8, 10, and 12-22 are pending in the application. Claims 9 and 11 have been cancelled, and claims 21 and 22 have been added. Applicant's arguments have been carefully and respectfully considered.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 2, 4, 6, 7, 10, 13, 15, 17, 18, and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Ghandi et al. (US 2023/0394391) in view of Ahmed et al. (US 2022/0245161) and Hajarnis et al. (US 2024/0330863).
With respect to claim 1, Ghandi teaches a method comprising:
extracting a key phrase from an online profile of a user of a professional social network (PSN) (Ghandi, pa 0028, user information 138 may be collected and/or gathered to model user interest to identify content that is most likely to be of interest to the user. User information 138 may include, but is not limited to, a user id 138A, explicit user interests 138B, browser search history 138C, search engine clicks 138D, search engine queries 138E, other content 138F consumed by an application utilized by the user, and/or other user metric information 138G ( e.g., dwell time, telemetry data, etc.) that may be used to model user behaviors and user interests.);
identifying a set of skills associated with the key phrase (Ghandi, Fig. 2, Automatically extract a first set of skill keywords from the candidate details & pa 0038, reskilling recommendation program 110A, 110B may, for example, extract 'Neural_networks', 'nlp', 'excel', 'machine_learning', and 'c++' as skill keywords 310 from Candidate A);
creating a skill-centric digital representation of the user based on the first set of skill embeddings (Ghandi, pa 0037, At 208, reskilling recommendation program 110A,110B automatically compares the word embeddings generated at 206 and calculates cosine similarity scores for the first and second set of skill keywords);
based on the skill-centric digital representation of the user (Ghandi, pa 0037, At 208, reskilling recommendation program 110A, 110B automatically compares the word embeddings generated at 206 and calculates cosine similarity scores for the first and second set of skill keywords), identifying a subset of digital documents to present to the user via the PSN (Ghandi, Fig. 2 & pa 0040, Finally, at 212, reskilling recommendation program 110A, 110B automatically generates and outputs reskilling recommendations to the user based on identified skill gaps. In embodiments, the reskilling recommendations may include, for example, recommended courses, assignments, or alternative open jobs.).
Ghandi doesn't expressly discuss creating a skill embedding using a first LLM and a skill embedding prompt, wherein the skill embedding prompt comprises an instruction formulated to cause the first LLM to generate and output the skill embedding based on a natural language description of a skill, wherein the natural language description of the skill is generated and output by a second LLM; storing the skill embedding in a store of skill embeddings, wherein the store of skill embeddings comprises links between skills and skill embeddings; retrieving, from the store of skill embeddings, a first set of skill embeddings corresponding to the set of skills, wherein the first set of skill embeddings comprises the skill embedding; creating a skill-centric digital representation of the user based on the first set of skill embeddings retrieved from the store of skill embeddings.
Ahmed teaches storing the skill embedding in a store of skill embeddings, wherein the store of skill embeddings comprises links between skills and skill embeddings (Ahmed, pa 0030, The document embedding, and in some instances the generated and/or extracted document understanding information is stored in the content index 136.);
retrieving, from the store of skill embeddings, a first set of skill embeddings corresponding to the set of skills, wherein the first set of skill embeddings comprises the skill embedding (Ahmed, pa 0031, The content identification & ranking module 144 receives a user profile including the user embedding and identifies a plurality of documents that are semantically similar to the user embedding. For example, a nearest neighbor search may identify a pool of content items having document embeddings that are semantically close, or otherwise semantically similar, to the user embedding.);
creating a skill-centric digital representation of the user based on the first set of skill embeddings retrieved from the store of skill embeddings (Ahmed, pa 0031, a nearest neighbor search may identify a pool of content items having document embeddings that are semantically close, or otherwise semantically similar, to the user embedding. In examples, the nearest neighbor search is one or more of an approximate nearest neighbor search (ANN), a k-nearest neighbor (KNN) search, and the like. The pool of document embeddings is then ranked based on a plurality of factors. In non-limiting examples, the document embeddings are ranked based on relevance, novelty, serendipity, diversity, and explainability.).
It would have been obvious at the effective filing date of the invention to a person having ordinary skill in the art to which said subject matter pertains to have modified Ghandi with the teachings of Ahmed because it provides an indication of content that indicates high quality sites that match the user’s interest (Ahmed, pa 0022).
Ghandi in view of Ahmed doesn't expressly discuss creating a skill embedding using a first LLM and a skill embedding prompt, wherein the skill embedding prompt comprises an instruction formulated to cause the first LLM to generate and output the skill embedding based on a natural language description of a skill, wherein the natural language description of the skill is generated and output by a second LLM.
Hajarnis teaches creating a skill embedding using a first LLM and a skill embedding prompt (Hajarnis, pa 0095, a large language model (e.g., GPT-3.0 or GPT-4.0) is applied to each of the talent profiles of potential candidates in the database to generate context dependent summaries of talent profiles. These summaries can be expressed in text, or can be encoded in the embedding vector representation of the large language model. A different large language model may be used to generate the embedding.), wherein the skill embedding prompt comprises an instruction formulated to cause the first LLM to generate and output the skill embedding based on a natural language description of a skill (Hajarnis, pa 0073, At 1201, an initial step in the operation may include a reception of an input prompt by the LLM. At this stage, the LLM is configured to receive input in the form of natural language text from a user or an automated system. & pa 0074, At 1202, the LLM may process the received prompt. The prompt may be segmented into individual tokens via tokenization for efficient model processing. These tokens may be transformed into vector representations through embedding, which encapsulates semantic and syntactic language attributes), wherein the natural language description of the skill is generated and output by a second LLM (Hajarnis, pa 0084, Processing device 102 may use a separate or equivalent LLM to format of the customized job description to a form that is suitable for question-and-answer tasks.).
It would have been obvious at the effective filing date of the invention to a person having ordinary skill in the art to which said subject matter pertains to have modified Ghandi in view of Ahmed with the teachings of Hajarnis because it reduces the workloads of humans and provides minimal bias (Hajarnis, pa 0022).
With respect to claim 2, Ghandi in view of Ahmed and Hajarnis teaches the method of claim 1, further comprising: sending a digital invitation to the user to contribute digital content to at least one of the documents in the identified subset of digital documents via the PSN (Ghandi, pa 0039, A user may define a suitable skill satisfaction threshold … reskilling recommendation program 110A, 110B may automatically generate and output explainability statements 330 to the user identifying which skill keywords were categorized as similar skills, and which were identified as skill gaps.).
With respect to claim 4, Ghandi in view of Ahmed and Hajarnis teaches the method of claim 1, wherein a skill embedding of a skill is pre-created by the first LLM by:
sending a first prompt including the skill to the first LLM; receiving a natural language description of the skill from the first LLM in response to the first prompt (Hajarnis, pa 0084, Processing device 102 may use a separate or equivalent LLM to format of the customized job description to a form that is suitable for question-and-answer tasks.);
sending a second prompt including the natural language description of the skill to the first LLM (Hajarnis, pa 0073, At 1201, an initial step in the operation may include a reception of an input prompt by the LLM. At this stage, the LLM is configured to receive input in the form of natural language text from a user or an automated system.); and
receiving the skill embedding from the first LLM in response to the second prompt (Hajarnis, pa 0074, At 1202, the LLM may process the received prompt. The prompt may be segmented into individual tokens via tokenization for efficient model processing. These tokens may be transformed into vector representations through embedding, which encapsulates semantic and syntactic language attributes.).
With respect to claim 6, Ghandi in view of Ahmed and Hajarnis teaches the method of claim 1, wherein the document comprises digital content generated by a generative artificial intelligence model (Hajarnis, pa 0078, At 1205, the LLM may generate output text by sequentially predicting a most probable next token given the context of the input prompt and the tokens generated. … The output generation process leverages the LLM's learned linguistic patterns and knowledge, enabling it to compose text that aligns with the initial prompt's intent. [0079] After an LLM is trained, the LLM may be deployed on a computer system for execution. Referring to FIG. 1, Responsive to the enriched prompt, processing device 102 may execute the LLM engine to generate a deep and customized job profile.).
With respect to claim 7, Ghandi in view of Ahmed and Hajarnis teaches the method of claim 1, wherein identifying the subset of digital documents to which the user may contribute digital content via the PSN comprises: algorithmically matching the skill-centric digital representation of the user with the skill-centric digital representation of the document (Ghandi, pa 0037, At 208, reskilling recommendation program 110A, 110B automatically compares the word embeddings generated at 206 and calculates cosine similarity scores for the first and second set of skill keywords).
With respect to claim 10, Ghandi in view of Ahmed and Hajarnis teaches the method of claim 7, further comprising: configuring the algorithmic matching based on engagement data associated with the PSN and at least one of the user or the document (Ahmed, pa 0031, The content identification & ranking module 144 receives a user profile including the user embedding and identifies a plurality of documents that are semantically similar to the user embedding. …a content item should be relevant to the user; the content item should preferably not have been previously seen by the user;).
With respect to claims 13 and 15, the limitations are essentially the same as claims 1 and 4, and are rejected for the same reasons.
With respect to claims 17 and 18, the limitations are essentially the same as claims 1 and 4, and are rejected for the same reasons.
With respect to claim 22, Ghandi in view of Ahmed and Hajarnis teaches the method of claim 1, further comprising: determining a skill expansion example based on historical engagement data (Hajarnis, pa 0085, Talent profiles stored in an information system 114 may be made available to users (e.g., the hiring managers or the talents themselves) to review and edit. The employee talents may use the talent customization system to build their career profiles by providing supporting documents such as resumes, cover letters, etc. in addition to curating or updating experience, education, skills and capabilities information.); and constraining the generation and output of the natural language description of the skill by providing the skill expansion example to the second LLM (Hajarnis, pa 0079, Responsive to the enriched prompt, processing device 102 may execute the LLM engine to generate a deep and customized job profile. Compared to drafting by a human user, the job description generated by the LLM engine may include information specified in the original request and information that was not specified in the original request but instead derived in the pre-processing operations.).
Claims 3, 14, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Ghandi in view of Ahmed and Hajarnis, and further in view of Saldivar III et al. (US 2025/0061529).
With respect to claim 3, Ghandi in view of Ahmed and Hajarnis teaches the method of claim 2, further comprising: receiving digital feedback from the PSN in response to the digital invitation (Ahmed, pa 0032, the blending module 150 gathers feedback on emerging, or recent documents, modifies one or more personalization weightings as more information about a user becomes available). Ghandi in view of Ahmed and Hajarnis doesn't expressly discuss fine tuning the first LLM based on the digital feedback.
Saldivar teaches fine tuning the first LLM based on the digital feedback (Saldivar, pa 0088, Other types of information may also be injected into the response, such as requests for reviews or assessments of the responses generated by the language model, requests for more information that is needed to complete a workflow (e.g., a number of questions for an assessment, what type of questions to use for an assessment, etc.), and/or the like. These requests may serve as a mechanism for gathering user feedback about the performance of the language model and/or supplying missing information to workflows. For example, the server platform 110 may use the feedback for fine-tuning the language model).
It would have been obvious at the effective filing date of the invention to a person having ordinary skill in the art to which said subject matter pertains to have modified Ghandi in view of Ahmed and Hajarnis with the teachings of Saldivar because it enhances the performance and accuracy of the language model (Saldivar, pa 0088).
With respect to claims 14 and 20, the limitations are essentially the same as claims 3 and 4, and are rejected for the same reasons.
Claims 5, 8, 12, 16, 19, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Ghandi in view of Ahmed and Hajarnis, and further in view of Dolga et al. (US 2025/0123841).
With respect to claim 5, Ghandi in view of Ahmed and Hajarnis teaches the method of claim 1, as discussed above.
Dolga teaches further comprising, for a document of the identified subset of digital documents, generating a skill-centric digital representation of the document by:
sending a first prompt including the document to the first LLM; receiving a document embedding from the first LLM in response to the first prompt (Dolga, pa 0106, definition of various terms provided by the glossary storage 501 may be utilized, and a sentence embedding may be extracted using a language model);
retrieving, from the store of skill embeddings pre-created by the first LLM, a second set of skill embeddings corresponding to the document embedding; and
aggregating the retrieved skill embeddings to create the skill-centric digital representation of the document (Dolga, pa 0106, Once the sentence embedding are extracted, clustering may be performed based on the sentence embedding for obtaining groups of related entities. These clusters may form an ontology, which may be a representative form of final list of skills, against which a skill matching 507 module compares skills within a given text ( e.g., select portions of processed or cleaned raw data) provided by the raw data acquisition and data cleaning 506 module. & pa 0112, such matching may be performed based on the generated ontology and using cosine similarity of the embeddings with all the representatives and terms in their clusters).
It would have been obvious at the effective filing date of the invention to a person having ordinary skill in the art to which said subject matter pertains to have modified Ghandi in view of Ahmed and Hajarnis with the teachings of Dolga because it curates a list of desired skills (Dolga, pa 0106).
With respect to claim 8, Ghandi in view of Ahmed and Hajarnis teaches the method of claim 7, as discussed above.
Dolga teaches further comprising: configuring the algorithmic matching based on a role of the user on the PSN (Dolga, pa 0114, the analytics 508 module may show that the SID2 developer may be the most experienced developer on the team, but has very limited evidenced experience in Skill B. Accordingly, even though the SID2 developer may be the most experienced developer in a group, the analytics 508 module may indicate that the respective developer is weak in Skill B, much so that a more junior SID3 developer having more experience in Skill B may be better suited for projects seeking Skill B.); in response to the role being a knowledge seeker, decreasing a matching threshold; and in response to the role being a knowledge contributor, increasing the matching threshold (Dolga, pa 0115, if a developer performs more than 75 tasks for Skill A, the respective developer may be deemed to be at a master level. If the developer performs more than 50 tasks but less than 75 tasks, the developer may be deemed to be at an intermediate level. In an example, the thresholds may be set differently for different skills. & pa 0116, the SID5 developer may be automatically assigned to various training due to lack of proficiency in many skills. Also, the SID1 developer may be recommended to apply to a higher position within the organization requiring expertise in Skill A.).
It would have been obvious at the effective filing date of the invention to a person having ordinary skill in the art to which said subject matter pertains to have modified Ghandi in view of Ahmed and Hajarnis with the teachings of Dolga because it provides information about a level of proficiency that indicates how reliable documents are (Dolga, pa 0115).
With respect to claim 12, Ghandi in view of Ahmed and Hajarnis teaches the method of claim 1, as discussed above.
Dolga teaches wherein identifying the set of skills comprises searching a digital taxonomy for canonical skill names that correspond to the key phrase (Dolga, pa 0106, Once the sentence embedding are extracted, clustering may be performed based on the sentence embedding for obtaining groups of related entities. These clusters may form an ontology).
It would have been obvious at the effective filing date of the invention to a person having ordinary skill in the art to which said subject matter pertains to have modified Ghandi in view of Ahmed and Hajarnis with the teachings of Dolga because it provides information about a level of proficiency that indicates how reliable documents are (Dolga, pa 0115).
With respect to claims 16 and 19, the limitations are essentially the same as claim 5, and are rejected for the same reasons.
With respect to claim 21, Ghandi in view of Ahmed and Hajarnis teaches the method of claim 1, as discussed above.
Dolga teaches further comprising: obtaining user activity data via a logging service; determining a current role based on the user activity data (Dolga, pa 0114, the analytics 508 module may show that the SID2 developer may be the most experienced developer on the team, but has very limited evidenced experience in Skill B. Accordingly, even though the SID2 developer may be the most experienced developer in a group, the analytics 508 module may indicate that the respective developer is weak in Skill B, much so that a more junior SID3 developer having more experience in Skill B may be better suited for projects seeking Skill B.);
retrieving the first set of skill embeddings from the store of skill embeddings based on the current role (Dolga, pa 0094, matching may be performed based on the generated ontology and using cosine similarity of the embeddings with all the representatives and terms in their clusters.).
It would have been obvious at the effective filing date of the invention to a person having ordinary skill in the art to which said subject matter pertains to have modified Ghandi in view of Ahmed and Hajarnis with the teachings of Dolga because it can identify a verified set of skills (Dolga, pa 0113).
Response to Arguments
35 U.S.C. 101
With regard to claims 1-20, the amendments to the claims have overcome the 35 U.S.C. 101 rejection. The Examiner withdraws the 35 U.S.C. 101 rejection of claims 1-20.
35 U.S.C. 103
Applicant's arguments are directed to a newly amended limitation. Applicant's amendment has rendered the previous rejection moot. Upon further consideration of the amendment, new grounds of rejection are made in view of Ghandi et al. (US 2023/0394391), Ahmed et al. (US 2022/0245161), and Hajarnis et al. (US 2024/0330863).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRITTANY N ALLEN whose telephone number is (571)270-3566. The examiner can normally be reached M-F, 9:00 am - 5:00 pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sherief Badawi can be reached at 571-272-9782. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BRITTANY N ALLEN/ Primary Examiner, Art Unit 2169