Prosecution Insights
Last updated: April 19, 2026
Application No. 18/520,863

Controlling the Use of Source Code for Training Artificial Intelligence (AI) Algorithms

Final Rejection: §101, §103
Filed
Nov 28, 2023
Examiner
GOORAY, MARK A
Art Unit
2199
Tech Center
2100 — Computer Architecture & Software
Assignee
Micro Focus LLC
OA Round
2 (Final)
Grant Probability: 76% (Favorable)
OA Rounds: 3-4
To Grant: 3y 11m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 76% — above average (305 granted / 400 resolved; +21.3% vs TC avg)
Interview Lift: +63.3% — strong (resolved cases with interview)
Avg Prosecution: 3y 11m (typical timeline; 23 currently pending)
Total Applications: 423 (career history, across all art units)

Statute-Specific Performance

§101: 20.4% (-19.6% vs TC avg)
§103: 48.8% (+8.8% vs TC avg)
§102: 13.9% (-26.1% vs TC avg)
§112: 14.2% (-25.8% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 400 resolved cases

Office Action

§101 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is in response to the response filed on 11/20/2025. This action is FINAL.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-22 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Claim 1 recites “train a code generation Artificial Intelligence (AI) model…”, “…generate output source code…”, “identify one or more software licenses…”, and “associate the identified one or more software licenses…”. The limitations of “train”, “generate”, “identify”, and “associate”, as drafted, are functions that, under their broadest reasonable interpretation, recite the abstract idea of a mental process. The limitations encompass a human mind carrying out the functions through observation, evaluation, judgment and/or opinion, or even with the aid of pen and paper. Thus, these limitations recite and fall within the “Mental Processes” grouping of abstract ideas under Prong 1.

Under Prong 2, this judicial exception is not integrated into a practical application. The additional elements of “retrieve, from a source code repository stored in memory, input source code…” and “automatically store the output source code together with the identified one or more software licenses in a repository” are insignificant pre-solution activities. They are recited at a high level of generality and are thus insignificant extra-solution activity. See MPEP 2106.05(g).
The limitation “for downstream compliance checking” is intended use and does not hold any patentable weight. Further, the limitation “execute the trained code generation AI algorithm” and the system comprising a microprocessor and computer readable medium are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer and/or mere computer components. See MPEP 2106.05(f). Accordingly, the additional elements do not integrate the recited judicial exception into a practical application, and the claim is therefore directed to the judicial exception.

Under Step 2B, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of “executing the trained code generation AI” and the system comprising a microprocessor and computer readable medium amount to no more than mere instructions, or generic computer/computer components, to carry out the exception. For the limitations of “retrieve input source code” and “store the output source code…”, the courts have identified receiving or transmitting data over a network and mere data gathering as well-understood, routine and conventional activity. See MPEP 2106.05(d) and MPEP 2106.05(f). The recitation of generic computer instructions and computer components to apply the judicial exception, and the well-understood, routine, conventional activities, do not amount to significantly more and thus cannot provide an inventive concept. Accordingly, claim 1 is not patent eligible under 35 USC 101.

Claim 2 claims “associate one or more attributions to the output source code”. The step of “associate” is an additional limitation of the abstract idea “Mental Process”. Nothing in the claimed limitations prevents this limitation from being performed in the mind.
Claim 3 claims “comparing snippets of the input source code…”, “comparing hashes of snippets…”, and “identifying all licenses…”, which are additional limitations of the abstract idea “Mental Process”. Nothing in the claimed limitations prevents the limitations from being performed in the mind. The limitation of “using a vector-based AI algorithm” is recited at a high level of generality such that it amounts to no more than mere instructions to apply the exception using a generic computer and/or mere computer components. See MPEP 2106.05(f). The additional elements are neither a practical application under Prong 2 nor an inventive concept under Step 2B.

Claim 4 claims “comparing the snippets of the input source code”, an additional limitation of the abstract idea “Mental Process”. Nothing in the claimed limitations prevents the limitation from being performed in the mind.

Claim 5 claims “identifying” and “comparing”. These limitations are additional limitations of the abstract idea “Mental Process”. Nothing in the claimed limitations prevents the limitations from being performed in the mind.

Claim 6 claims “identifying”. The limitation is an additional limitation of the abstract idea “Mental Process”. Nothing in the claimed limitations prevents the limitation from being performed in the mind.

Claim 7 claims “wherein identifying one or more licenses…based on the AI algorithm”. The additional elements are neither a practical application under Prong 2 nor an inventive concept under Step 2B.

Claim 8 claims “wherein the identified plurality of software licenses… based on a likelihood threshold”. The additional elements are neither a practical application under Prong 2 nor an inventive concept under Step 2B.

Claim 9 claims “scan the one or more source code repositories to identify source code…” and “filter out the identified source code in the one or more source code repositories”. These limitations are additional limitations of the abstract idea “Mental Process”.
Nothing in the claimed limitations prevents the limitations from being performed in the mind. The claim further claims “receive input…”. The courts have identified mere data gathering as well-understood, routine and conventional activity. See MPEP 2106.05(d) and MPEP 2106.05(f).

Claim 10 claims “determine if there are any similarities…” and “refilter the input source code”. These limitations are additional limitations of the abstract idea “Mental Process”. Nothing in the claimed limitations prevents the limitations from being performed in the mind.

Claim 11 claims “scan the output source code to determine similarities”, “identify a degree of similarities”, and “generate feedback”. These limitations are additional limitations of the abstract idea “Mental Process”. Nothing in the claimed limitations prevents the limitations from being performed in the mind.

Claim 12 claims “generating… an indication” and “receive, via the graphical user interface, user input”. The “generating… an indication” step is recited at a high level of generality and is thus insignificant extra-solution activity. See MPEP 2106.05(g). Regarding the “receive input” step, the courts have identified mere data gathering as well-understood, routine and conventional activity. See MPEP 2106.05(d) and MPEP 2106.05(f). Lastly, the “filter output source code” step is an additional limitation of the abstract idea “Mental Process”. Nothing in the claimed limitations prevents the limitation from being performed in the mind. The additional elements are neither a practical application under Prong 2 nor an inventive concept under Step 2B.

Claim 13 claims “generate, for display…”, “receive”, and “associate”. The step of “generate, for display…” is recited at a high level of generality and is thus insignificant extra-solution activity. See MPEP 2106.05(g). For the step of “receive”, the courts have identified mere data gathering as well-understood, routine and conventional activity.
See MPEP 2106.05(d) and MPEP 2106.05(f). Lastly, the step of “associate” is an additional limitation of the abstract idea “Mental Process”. Nothing in the claimed limitations prevents the limitation from being performed in the mind. The additional elements are neither a practical application under Prong 2 nor an inventive concept under Step 2B.

For claim 14, the additional elements are neither a practical application under Prong 2 nor an inventive concept under Step 2B.

Claims 15-22 contain similar limitations to claims 1, 3, 4-6, 11 and 13 and are therefore rejected for similar reasons.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-6, 9-10, 14-19 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Samudrala et al. (US 2024/0111843 A1) further in view of Pham (US 8,359,655 B1).

As per claim 1, Samudrala et al.
teaches the invention as claimed, including: “A system, comprising: a microprocessor; and a computer readable medium, coupled with the microprocessor and comprising microprocessor readable and executable instructions that, when executed by the microprocessor, cause the microprocessor to: retrieve, from a source code repository stored in memory, input source code, wherein the input source code is associated with one or more software licenses; train a code generation Artificial Intelligence (AI) model using the input source code;”

Code suggestion may generate code suggestions based on text input in a development environment. Code suggestion uses generative models, machine learning models such as a generative pre-trained transformer (GPT), trained to generate code suggestions. Generative models are trained on a large corpus of data (input source code) for a specific task. Code repositories are used to train the generative model. The code may be subject to certain licenses (see 0055). Also see 0018 and 0021.

“execute the trained code generation AI model to generate output source code;”

Code suggestions are generated based on programming language suggestion models configured to apply machine learning models to various source code files to determine code portions to be suggested (0018). The code suggestion service is configured to generate a set of candidate code suggestions for received code input based, at least in part, on the plurality of source code files. The code suggestion service may be configured to determine one or more code suggestions for received code input based, at least in part, on the respective licenses for respective ones of the plurality of source code files. The suggestion satisfies the one or more licensing criteria (0021).

Samudrala et al.
does not explicitly appear to teach: “identify one or more software licenses associated with the output source code; associate the identified one or more software licenses with the output source code; and automatically store the output source code together with the identified one or more software licenses in a repository for downstream compliance checking.”

Pham teaches a software recognition engine configured to analyze and classify or categorize software code and licenses associated therewith (column 2, lines 14-18). A server accepts software source code to be analyzed. A classification engine scans for instances of open-source code within received software source code. One or more licenses are identified as being associated with the input source code (column 3, lines 51-63). Submissions to the code datastore and license datastore may be made voluntarily by external code/license sources 118A…118n or by internal sources associated with the software recognition engine (column 5, lines 1-10). In order to verify the submissions from the code/license sources, the submissions may be initially stored in a third-party code/license datastore that is not part of the production code datastore and/or license datastore (column 5, lines 24-30). The import engine may perform preprocessing of the submissions of the code/license sources stored in the third-party code/license datastore. The import engine may apply rules to the submissions in the third-party code/license datastore to verify the accuracy and completeness of the submissions and commit the submissions to the production code datastore and/or the license datastore (column 5, lines 46-56). Also see column 8, lines 30-37. The code datastore and the license datastore may be implemented as one datastore (column 4, lines 28-30). The classification engine may identify one or more licenses and classifications associated therewith using the code datastore and/or license datastore.
The software recognition engine may recommend action or remediation plans based on the determined classification and/or compliance options with license requirements (column 4, lines 58-67). Also see column 7, line 63 - column 8, lines 1-16.

It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Samudrala et al. with Pham because both teach code containing licenses. Samudrala et al. teaches generating code suggestions using generative models, machine learning models such as a generative pre-trained transformer (GPT), trained to generate code suggestions. Generative models are trained using source code. Code repositories are used to train the generative model. The code may be subject to certain licenses (0055). The code suggestion service may be configured to determine one or more code suggestions for received code input based, at least in part, on the respective licenses for respective ones of the plurality of source code files. The suggestion satisfies the one or more licensing criteria (0021). Therefore, the output source code of Samudrala et al. is source code satisfying one or more licensing criteria. Pham teaches a way to classify or categorize software code and the licenses associated with it. This would allow Samudrala et al. to check the generated source code to make sure it conforms to licenses and also to save the code to a database. The classification engine of Pham identifies one or more licenses and classifications associated therewith using the code datastore and/or license datastore. The software recognition engine may recommend action or remediation plans based on the determined classification and/or compliance options with license requirements (column 4, lines 58-67). Therefore, applying the known technique of Pham would allow Samudrala et al. to verify license compliance of source code generated from its machine learning model.
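As a purely illustrative sketch of the combined teaching described above — identify licenses for generated code by matching it against indexed training snippets, then store the output together with its licenses for compliance checking — the following shows one way such a pipeline could look. It is not code from either reference; every name, and the hash-over-line-windows scheme, are assumptions made for illustration only.

```python
import hashlib

# Hypothetical in-memory "license datastore" mapping snippet hash -> licenses.
# In Pham this role is played by the code/license datastores; a dict stands in here.
LICENSE_DB: dict = {}

def index_snippet(snippet: str, license_name: str) -> None:
    """Register a training snippet under its license (cf. populating the datastore)."""
    digest = hashlib.sha256(snippet.encode()).hexdigest()
    LICENSE_DB.setdefault(digest, set()).add(license_name)

def identify_licenses(output_code: str, window: int = 3) -> set:
    """Identify licenses for generated code by comparing hashes of its line
    windows against the indexed snippet hashes (cf. claim 5's hash comparison)."""
    lines = [ln for ln in output_code.splitlines() if ln.strip()]
    found = set()
    for i in range(max(len(lines) - window + 1, 1)):
        digest = hashlib.sha256("\n".join(lines[i:i + window]).encode()).hexdigest()
        found |= LICENSE_DB.get(digest, set())
    return found

def store_with_licenses(output_code: str, repo: list) -> None:
    """Store output code together with its identified licenses (cf. claim 1's
    'automatically store ... for downstream compliance checking')."""
    repo.append({"code": output_code,
                 "licenses": sorted(identify_licenses(output_code))})
```

For example, indexing the three-line snippet `"a = 1\nb = 2\nc = a + b"` under `"MIT"` and then storing that same text produces a repository entry whose `licenses` list is `["MIT"]`, while unindexed code yields no licenses.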
As per claim 2, Pham further teaches: “The system of claim 1, wherein the microprocessor readable and executable instructions further cause the microprocessor to: associate one or more attributions to the output source code.” Pham teaches a software recognition engine configured to analyze and classify or categorize software code and licenses associated therewith (column 2, lines 14-18). A server accepts software source code to be analyzed. A classification engine scans for instances of open-source code within received software source code. One or more licenses are identified as being associated with the input source code. Results are generated and sent to clients. Results include an identified license associated with the received software code (column 3, line 51 - column 4, line 4). Also see column 4, lines 15-30.

As per claim 3 (Amended), Pham and Samudrala et al. further teach: “The system of claim 1, wherein the one or more licenses comprises a plurality of software licenses and wherein identifying the one or more licenses associated with the output source code is further based on at least one of the following: comparing snippets of the input source code to snippets of the output source code; comparing hashes of snippets of the input source code to hashes of snippets of the output source code; using a vector-based AI algorithm; and identifying all licenses associated with the input source code.” Pham teaches a software recognition engine configured to analyze and classify or categorize software code and licenses associated therewith (column 2, lines 14-18). A server accepts software source code to be analyzed. A classification engine scans for instances of open-source code within received software source code. One or more licenses are identified as being associated with the input source code (column 3, line 51 - column 4, line 4). Source code is scanned and compared with source code stored in the code datastore.
The classifier uses the results of the comparison to determine a license associated with the submitted source code and/or a class of the determined license (column 7, lines 52-62). Samudrala et al. teaches that a license attribution service may receive an indication of a source code repository to attribute licenses to source code files of the source code repository. A source code parser may parse the source code files to determine whether portions of the source code files include indications of licenses that are attributable to the source code files. The parsed text may be sent to a text comparator to be compared against license text from a license database (0036). String matching may be based on similarity matching. Similarity scores indicate a probability that source files are likely to be of a particular license (0039). Based on the similarity score, the license attribution service may apply license attribution to attribute a particular license to a respective source code file (0040). A hash may be used to match (0046). Source code license attribution may obtain source code files from source code repositories. Source code license attribution may parse the source code files to identify which licenses are applicable to the respective source code files. Text of the source code files is analyzed to determine the attributed license and populate a generated code license attribution database based on the attributed license for respective source code files (0033).

As per claim 4 (Amended), Pham and Samudrala et al. further teach: “The system of claim 3, wherein identifying the one or more software licenses associated with the output source code is based on comparing the snippets of the input source code to the snippets of the output source code.” Pham teaches a software recognition engine configured to analyze and classify or categorize software code and licenses associated therewith (column 2, lines 14-18). A server accepts software source code to be analyzed.
A classification engine scans for instances of open-source code within received software source code. One or more licenses are identified as being associated with the input source code (column 3, line 51 - column 4, line 4). Source code is scanned and compared with source code stored in the code datastore. The classifier uses the results of the comparison to determine a license associated with the submitted source code and/or a class of the determined license (column 7, lines 52-62). Samudrala et al. teaches that a license attribution service may receive an indication of a source code repository to attribute licenses to source code files of the source code repository. A source code parser may parse the source code files to determine whether portions of the source code files include indications of licenses that are attributable to the source code files. The parsed text may be sent to a text comparator to be compared against license text from a license database (0036). String matching may be based on similarity matching. Similarity scores indicate a probability that source files are likely to be of a particular license (0039). Based on the similarity score, the license attribution service may apply license attribution to attribute a particular license to a respective source code file (0040). Source code license attribution may obtain source code files from source code repositories. Source code license attribution may parse the source code files to identify which licenses are applicable to the respective source code files. Text of the source code files is analyzed to determine the attributed license and populate a generated code license attribution database based on the attributed license for respective source code files (0033).

As per claim 5 (Amended), Samudrala et al.
further teaches: “The system of claim 3, wherein identifying the one or more software licenses associated with the output source code is based on the comparing of the hashes of snippets of the input source code to the hashes of snippets of the output source code.” A license attribution service may receive an indication of a source code repository to attribute licenses to source code files of the source code repository. A source code parser may parse the source code files to determine whether portions of the source code files include indications of licenses that are attributable to the source code files. The parsed text may be sent to a text comparator to be compared against license text from a license database (0036). String matching may be based on similarity matching. Similarity scores indicate a probability that source files are likely to be of a particular license (0039). Based on the similarity score, the license attribution service may apply license attribution to attribute a particular license to a respective source code file (0040). A hash may be used to match (0046). Source code license attribution may obtain source code files from source code repositories. Source code license attribution may parse the source code files to identify which licenses are applicable to the respective source code files. Text of the source code files is analyzed to determine the attributed license and populate a generated code license attribution database based on the attributed license for respective source code files (0033).

As per claim 6 (Amended), Pham and Samudrala et al. further teach: “The system of claim 3, wherein identifying the one or more software licenses associated with the output source code is based on identifying all the software licenses associated with the input source code.” Pham teaches a software recognition engine configured to analyze and classify or categorize software code and licenses associated therewith (column 2, lines 14-18).
A server accepts software source code to be analyzed. A classification engine scans for instances of open-source code within received software source code. One or more licenses are identified as being associated with the input source code (column 3, line 51 - column 4, line 4). Source code is scanned and compared with source code stored in the code datastore. The classifier uses the results of the comparison to determine a license associated with the submitted source code and/or a class of the determined license (column 7, lines 52-62). Samudrala et al. teaches that a license attribution service may receive an indication of a source code repository to attribute licenses to source code files of the source code repository. A source code parser may parse the source code files to determine whether portions of the source code files include indications of licenses that are attributable to the source code files. The parsed text may be sent to a text comparator to be compared against license text from a license database (0036). String matching may be based on similarity matching. Similarity scores indicate a probability that source files are likely to be of a particular license (0039). Based on the similarity score, the license attribution service may apply license attribution to attribute a particular license to a respective source code file (0040). A hash may be used to match (0046). Source code license attribution may obtain source code files from source code repositories. Source code license attribution may parse the source code files to identify which licenses are applicable to the respective source code files. Text of the source code files is analyzed to determine the attributed license and populate a generated code license attribution database based on the attributed license for respective source code files (0033).

As per claim 9 (Amended), Samudrala et al.
further teaches: “The system of claim 1, wherein the microprocessor readable and executable instructions further cause the microprocessor to: receive input that identifies one or more source code repositories;” A license attribution service may receive an indication of a source code repository to attribute licenses to source code files of the source code repository. A source code parser may parse the source code files to determine whether portions of the source code files include indications of licenses that are attributable to the source code files. The parsed text may be sent to a text comparator to be compared against license text from a license database (0036).

“receive input identifying one or more software license types to be filtered; scan the one or more source code repositories to identify source code associated with the one or more specified software license types; and filter out the identified source code associated with the specified software license types.”

Source code files may be filtered according to one or more licensing criteria (0019). The database may indicate that the particular software license has various criteria that may be used to filter or restrict the particular source code file from being included as a code suggestion candidate for developers that choose to restrict based on the licensing criteria (0020). The code suggestion service receives a request that specifies one or more licensing criteria. The code suggestion service is configured to determine respective licenses for respective ones of a plurality of source code files according to a source code attribution database that comprises indications of the respective licenses identified from parsing the plurality of source code files (0021). The developer of the code file input may modify requests to indicate that certain license types or characteristics are to be included or excluded as part of code suggestion generation (0028).
The licensing criteria filtering may filter candidate code suggestions by applying the licensing criteria from the request. A code suggestion is determined based on source code and license information provided by the source code license attribution (0030). The licensing criteria filtering may be used to train the programming language prediction models to improve subsequent results for candidate code suggestions. The list of candidate code suggestions may exclude licenses that do not satisfy the licensing criteria (0034). Predictions may provide license filtering, which may filter or limit predictions based on licensing criteria provided by the client (0076). License filtering may be used to train the programming language prediction models to improve subsequent results for candidate code suggestions (0076).

As per claim 10 (Amended), Samudrala et al. further teaches: “The system of claim 9, wherein the microprocessor readable and executable instructions further cause the microprocessor to: determine whether portions of the input source code are similar to portions of the filtered-out source code; and, in response to determining such similarities, remove the portions of the input source code that correspond to the similar portions of the filtered-out source code.”

Source code license attribution may obtain source code files from source code repositories. Source code license attribution may parse the source code files to identify which licenses are applicable to the respective source code files. Text of the source code files is analyzed to determine the attributed license and populate a generated code license attribution database based on the attributed license for respective source code files (0033). A license attribution service may receive an indication of a source code repository to attribute licenses to source code files of the source code repository.
A source code parser may parse the source code files to determine whether portions of the source code files include indications of licenses that are attributable to the source code files. The parsed text may be sent to a text comparator to be compared against license text from a license database (0036). String matching may be based on similarity matching. Similarity scores indicate a probability that source files are likely to be of a particular license (0039). Based on the similarity score, the license attribution service may apply license attribution to attribute a particular license to a respective source code file (0040). A hash may be used to match (0046). Source code files may be filtered according to one or more licensing criteria (0019). The database may indicate that the particular software license has various criteria that may be used to filter or restrict the particular source code file from being included as a code suggestion candidate for developers that choose to restrict based on the licensing criteria (0020). The code suggestion service receives a request that specifies one or more licensing criteria. The code suggestion service is configured to determine respective licenses for respective ones of a plurality of source code files according to a source code attribution database that comprises indications of the respective licenses identified from parsing the plurality of source code files (0021). The developer of the code file input may modify requests to indicate that certain license types or characteristics are to be included or excluded as part of code suggestion generation (0028). The licensing criteria filtering may filter candidate code suggestions by applying the licensing criteria from the request. A code suggestion is determined based on source code and license information provided by the source code license attribution (0030).
The licensing criteria filtering may be used to train the programming language prediction models to improve subsequent results for candidate code suggestions. The list of candidate code suggestions may exclude licenses that do not satisfy the licensing criteria (0034). Predictions may provide license filtering, which may filter or limit predictions based on licensing criteria provided by the client (0076). License filtering may be used to train the programming language prediction models to improve subsequent results for candidate code suggestions (0076). As stated above, licenses can be determined or matched based on similarity; therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date for them also to be filtered based on similarity to determine a match for filtering. This would allow the system to detect more licenses and filter the necessary ones, and would have been obvious to try.

As per claim 14 (Amended), Pham and Samudrala et al. further teach: “The system of claim 1, wherein identifying the one or more software licenses associated with the output source code comprises identifying a specific one of the one or more software licenses from multiple components of the input source code.” Pham teaches a software recognition engine configured to analyze and classify or categorize software code and licenses associated therewith (column 2, lines 14-18). A server accepts software source code to be analyzed. A classification engine scans for instances of open-source code within received software source code. One or more licenses are identified as being associated with the input source code (column 3, line 51 - column 4, line 4). Source code is scanned and compared with source code stored in the code datastore. The classifier uses the results of the comparison to determine a license associated with the submitted source code and/or a class of the determined license (column 7, lines 52-62). Samudrala et al.
teaches, the license attribution service may receive an indication of a source code repository to attribute licenses to the source code files of that repository. The source code parser may parse the source code files to determine whether portions of the files include indications of licenses attributable to them. The parsed text may be sent to a text comparator to be compared against license text from a license database (0036). String matching may be based on similarity matching. Similarity scores indicate a probability that source files are likely to be of a particular license (0039). Based on the similarity score, the license attribution service may attribute a particular license to the respective source code file (0040). Source code license attribution may obtain source code files from source code repositories and parse them to identify which licenses are applicable to the respective files. The text of the source code files is analyzed to determine the attributed license and to populate a generated code license attribution database based on the attributed license for the respective source code files (0033).

As per claims 15-19 and 22, they contain similar limitations to claims 1 and 3-4 and are therefore rejected for the same reasons.

Claims 7-8 are rejected under 35 U.S.C. 103 as being unpatentable over Samudrala et al. (US 2024/0111843 A1) and Pham (US 8,359,655 B1) as applied to claim 1 above, and further in view of Kadu et al. (US 11,954,485 B1). As per claim 7 (Amended), Samudrala et al. and Pham do not explicitly appear to teach, "The system of claim 3, wherein identifying the one or more software licenses associated with the output source code is based on the vector-based AI model." Kadu et al. teaches scanning a source code file to identify text lines that are processed by a classifier to determine licenses (column 1, lines 35-49).
Also see column 4, lines 22-43. A neural network classifier is used (column 3, lines 30-50). Embeddings generate vectors for the classifier (column 10, lines 7-62). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Samudrala et al. and Pham with Kadu et al. because all teach analyzing source code to determine license information using comparison or classification. Using a comparison/matching algorithm is similar to using a machine learning algorithm to perform classification; it is nothing more than a design choice and produces similar results.

As per claim 8 (Amended), Samudrala et al. further teaches, "The system of claim 7, wherein the identified plurality of software licenses are associated with the output source code based on a likelihood threshold." See paragraph 0039.

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Samudrala et al. (US 2024/0111843 A1) and Pham (US 8,359,655 B1) as applied to claim 1 above, and further in view of Weigert et al. (US 2010/0241469 A1). As per claim 12 (Amended), Samudrala et al. further teaches, "The system of claim 1, wherein the one or more software licenses comprise a plurality of software licenses, wherein at least two of the plurality of software licenses are incompatible, and wherein the microprocessor readable and executable instructions further cause the microprocessor to: generate, for display in a graphical user interface, an indication that identifies the incompatible software licenses; receive, via the graphical user interface, user input selecting one of the software licenses to filter; and filter out source code associated with the selected incompatible software license." The code suggestion service receives a request that specifies one or more licensing criteria.
The code suggestion service is configured to determine respective licenses for respective ones of a plurality of source code files according to a source code attribution database that comprises indications of the respective licenses identified from parsing the plurality of source code files (0021). The developer of the code file input may modify requests to indicate that certain license types or characteristics are to be included or excluded as part of code suggestion generation (0028). Licensing criteria filtering may filter candidate code suggestions by applying the licensing criteria from the request. A code suggestion is determined based on source code and license information provided by the source code license attribution (0030). The licensing criteria filtering may be used to train the programming language prediction models to improve subsequent results for candidate code suggestions. The list of candidate code suggestions may exclude licenses that do not satisfy the licensing criteria (0034). Predictions may provide license filtering, which may filter or limit predictions based on licensing criteria provided by the client (0076). Licensing filtering may be used to train the programming language prediction models to improve subsequent results for candidate code suggestions (0076).

However, Samudrala et al. does not explicitly appear to teach determining incompatible licenses and displaying them. Weigert et al. teaches that any new packages may initially be associated with a candidate status, wherein candidate packages may initially be blocked from distribution (filtered). If the candidate package is determined to be in compliance with open source code licenses, export regulations, and other requirements, the status of the package may be changed to production status to permit distribution (0012).
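As a rough sketch of the licensing-criteria filtering cited repeatedly above from Samudrala (paragraphs 0028, 0030, 0034), candidate code suggestions carrying excluded license types could be dropped before presentation. The data shapes and names here are hypothetical, chosen only to make the mechanism concrete:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    code: str
    license: str  # license attributed to the source the suggestion derives from

def apply_licensing_criteria(candidates, exclude=frozenset(), include=None):
    """Drop candidates whose attributed license is excluded by the request;
    if an include set is given, keep only those license types."""
    kept = []
    for c in candidates:
        if c.license in exclude:
            continue
        if include is not None and c.license not in include:
            continue
        kept.append(c)
    return kept
```

For example, a request excluding copyleft licenses would pass `exclude={"GPL-3.0"}` and only permissively licensed candidates would survive the filter.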
Compliance reports include an indication of whether the software component includes multiple licenses or inter-package relationships, in which case the compliance officer or other authorized reviewer may be required to resolve the relationships among the multiple licenses or packages. The report identifies license declaration issues (0023). Also see 0032. It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Samudrala et al. with Weigert et al. Samudrala et al. teaches that a developer can set criteria to exclude/filter certain license types. Weigert et al. teaches determining license issues and reporting them to the developer. This would allow a developer in Samudrala et al. to know what types of licenses to filter and would have been obvious to try.

Claims 13 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Samudrala et al. (US 2024/0111843 A1) and Pham (US 8,359,655 B1) as applied to claims 1 and 15 above, and further in view of Krawetz (US 8,108,315 B2). As per claim 13 (Amended), Samudrala et al. further teaches, "The system of claim 1, wherein the microprocessor readable and executable instructions further cause the microprocessor to: generate, for display in a graphical user interface, a listing indicating percentages representing similarity matches between software licenses and the output source code; receive, via the graphical user interface, a user-specified similarity threshold; and associate the one or more software licenses with the output source code based on whether the similarity matches satisfy the similarity threshold." Samudrala et al. teaches that string matching may be based on similarity matching. String matching may generate similarity scores indicating a probability that the N-grams for respective input text of the source code files are likely to be of a particular license. A similarity score of .95 or greater may indicate a high probability that the input text represents the particular license (0039).
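The claim 13 mechanism, a listing of similarity percentages gated by a user-specified threshold, could look roughly like the following sketch; the function name and data shapes are illustrative assumptions, not drawn from the claim or the cited references:

```python
def list_and_associate(scores, user_threshold):
    """Given a mapping of license -> similarity score in [0, 1], build a
    displayable listing of percentages (highest first) and the licenses
    whose scores meet the user-specified threshold."""
    listing = sorted(
        ((name, round(score * 100, 1)) for name, score in scores.items()),
        key=lambda item: -item[1],
    )
    associated = [name for name, score in scores.items() if score >= user_threshold]
    return listing, associated
```

With a 0.95 threshold, as in Samudrala's paragraph 0039, only a near-verbatim license match would be associated with the output code, while the listing still shows weaker matches for the developer to review.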
The developer may request that the presented code suggestions exclude specified characteristics with respect to their respective software licenses (0026). The developer of the code file input may modify requests to indicate that certain license types or characteristics are to be included or excluded as part of code suggestion generation (0028).

However, Samudrala et al. does not explicitly appear to teach a graphical user interface to "display in a graphical user interface, a listing of percentages of a match that the one or more licenses are generated from the input source code". Krawetz teaches matching based on similarity and a threshold. Krawetz further teaches that results are logged and saved, and that results are provided to a user as output to a display. This includes the name of the target and known functions, the percent match, and the exact tokens that did and/or did not match (column 4, lines 25-43). A threshold of 80% indicates that a sequence is reused, and this corresponds to a known or pre-existing function subject to one or more licenses (column 7, lines 1-4). It would have been obvious to one of ordinary skill in the art before the effective filing date to modify Samudrala et al. with Krawetz because both teach determining whether software files are associated with licenses. Samudrala et al. teaches matching using a similarity score and threshold in order to determine a license type in the code. Krawetz also teaches the use of a similarity score and threshold, and further teaches displaying these results. This would allow a developer in Samudrala et al. to visualize licenses in code and would have been obvious to try. As per claim 21, claim 21 contains similar limitations to claim 13 and is therefore rejected for the same reason.

Response to Arguments

Applicant's arguments filed 11/20/2025 have been fully considered but they are not persuasive.
Regarding the 35 U.S.C. 101 rejection, applicant states that amendments were made to overcome the current rejection. The examiner disagrees. Please see the above rejection.

Regarding the 35 U.S.C. 112(b) rejections: these rejections have been withdrawn in view of the amendments.

Regarding the 35 U.S.C. 103 rejection, applicant argues that "Nowhere does Samudrala teach or suggest that the system retrieves source code for the purpose of using it as training data for a code generation AI model nor that such retrieved code is necessarily subject to one or more software licenses as recited in claim 1. Samudrala's parsing and attribution operations concern identifying license text in already existing files, not retrieving software licensed input code for AI model training. The Office Action's reasoning that it would have been obvious to analyze predicted code for licenses does not address how or why input code subject to license would be retrieved and used for training the model in the first place. The step of 'retrieve, from a source code repository stored in memory, input source code, wherein the input source code is associated with one or more software licenses' introduces a specific relationship between training data (licensed code) and the training AI model that is not taught or suggested by Samudrala's database population or license scanning process. Accordingly, Samudrala does not teach or suggest this feature." The examiner respectfully disagrees. As shown in the above rejection, Samudrala teaches that code suggestion may generate code suggestions based on text input in a development environment. Code suggestion uses generative models, i.e., machine learning models such as a generative pre-trained transformer (GPT), trained to generate code suggestions. Generative models are trained on a large corpus of data (input source code) for a specific task. Code repositories are used to train the generative model, and the code may be subject to certain licenses (0055). Also see 0018 and 0021.
This teaches that a trained machine learning model, which is equivalent to an AI model, is trained to generate code suggestions. The cited paragraph also states that the machine learning model is trained from code repositories and that the code can be subject to certain licenses. Therefore, a repository containing source code subject to licenses is used to train the machine learning model. It would be inherent for the code in the repository to be retrieved from the repository in order to train the machine learning model; the model would need to receive the data in order to be trained. Therefore, the above cited portions of Samudrala teach the claimed limitations. The rest of the applicant's arguments are directed to newly amended limitations and are therefore moot. Please see the above rejection over Samudrala et al. further in view of Pham.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARK A GOORAY whose telephone number is (571)270-7805.
The examiner can normally be reached Monday - Friday 10:00am - 6:00pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Lewis Bullock can be reached at 571-272-3759. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /MARK A GOORAY/ Examiner, Art Unit 2199 /LEWIS A BULLOCK JR/ Supervisory Patent Examiner, Art Unit 2199

Prosecution Timeline

Nov 28, 2023
Application Filed
Aug 09, 2025
Non-Final Rejection — §101, §103
Sep 18, 2025
Applicant Interview (Telephonic)
Sep 19, 2025
Examiner Interview Summary
Nov 20, 2025
Response Filed
Feb 19, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596627
AGENTLESS SYSTEM AND METHOD FOR DISCOVERING AND INSPECTING APPLICATIONS AND SERVICES IN COMPUTE ENVIRONMENTS
2y 5m to grant Granted Apr 07, 2026
Patent 12572444
COMPATIBILITY CHECK FOR CONTINUOUS GLUCOSE MONITORING APPLICATION
2y 5m to grant Granted Mar 10, 2026
Patent 12566587
REAL-TIME COMPUTING RESOURCE DEPLOYMENT AND INTEGRATION
2y 5m to grant Granted Mar 03, 2026
Patent 12535995
COMPUTER CODE GENERATION FROM TASK DESCRIPTIONS USING NEURAL NETWORKS
2y 5m to grant Granted Jan 27, 2026
Patent 12536091
PROGRAM ANALYSIS APPARATUS, PROGRAM ANALYSIS METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM STORING PROGRAM
2y 5m to grant Granted Jan 27, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
76%
Grant Probability
99%
With Interview (+63.3%)
3y 11m
Median Time to Grant
Moderate
PTA Risk
Based on 400 resolved cases by this examiner. Grant probability derived from career allow rate.
