Prosecution Insights
Last updated: April 19, 2026
Application No. 18/974,948

METHOD, SYSTEM, AND PROGRAM FOR SEARCHING SIMILAR CONTENT BASED ON LARGE LANGUAGE MODELS

Status: Final Rejection (§103)
Filed: Dec 10, 2024
Examiner: MAMILLAPALLI, PAVAN
Art Unit: 2159
Tech Center: 2100 — Computer Architecture & Software
Assignee: CJ OliveNetworks Co., Ltd.
OA Round: 2 (Final)
Grant Probability: 80% (Favorable)
OA Rounds: 3-4
To Grant: 3y 3m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 80%, above average (597 granted / 743 resolved; +25.3% vs TC avg)
Interview Lift: +17.2% across resolved cases with interview (strong)
Typical Timeline: 3y 3m avg prosecution; 21 currently pending
Career History: 764 total applications across all art units

Statute-Specific Performance

§101: 24.1% (-15.9% vs TC avg)
§103: 51.7% (+11.7% vs TC avg)
§102: 8.0% (-32.0% vs TC avg)
§112: 8.5% (-31.5% vs TC avg)
Comparison baseline is the Tech Center average estimate • Based on career data from 743 resolved cases

Office Action

§103
DETAILED ACTION

This Office Action is in response to Amendments and Arguments submitted on October 29, 2025 for Application # 18/974,948, filed on December 10, 2024, in which claims 1-18 are presented for examination.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. KR10-2024-0085659, filed on 06/28/2024.

Status of Claims

Claims 1-18 are pending, of which claims 1-18 are rejected under 35 U.S.C. 103. Claims 1, 2, 10, 11, 12, 13, 15, 16, 17 and 18 are amended. No claims are canceled. No claims are newly added.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art.
The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-18 are rejected under 35 U.S.C. 103 as being unpatentable over Allen O’Neill US 2024/0256592 A1 (hereinafter ‘O’Neill’) in view of Bex, IV et al. US 2024/0403563 A1 (hereinafter ‘Bex’) as applied, and further in view of Richard Walsh US 2019/0205373 A1 (hereinafter ‘Walsh’).

As per claim 1, O’Neill disclose, A method for searching for similar content (O’Neill: paragraph 0106: disclose quantifies the similarity between two pieces, portions, or segments of media content) using a system comprising a CPU and memory (O’Neill: paragraph 0650: disclose system operates on CPU of the computing device), comprising the steps of: (a) a script data preprocessing step for preprocessing script data (O’Neill: paragraph 0157: disclose extraction engine to isolate specific elements from the media, such as text from videos ‘script data’, which examiner equates to preprocessing) for a plurality of drama contents (O’Neill: paragraph 0015: disclose determine attributes within the audio component of the selected segment may include aggregating the media content details. Examiner equates content attributes and details to drama contents as argument is drama is a type of content), each drama a content comprising a plurality of episodes (O’Neill: paragraph 0200: disclose media content segments include an action sequence in a film in which a timestamp attribute marks its occurrence in the film and a scene attribute.
Examiner equates content segments to plurality of episodes and also examiner would discuss episodes in view of secondary art below); (d) a similarity score determination step in which the content-level features of a target content from among plurality of drama contents (O’Neill: paragraph 0015: disclose determine attributes within the audio component of the selected segment may include aggregating the media content details. Examiner equates content attributes and details to drama contents as argument is drama is a type of content) are compared with the content-level features of other content from among the plurality of drama contents and similarity score is determined (O’Neill: paragraph 0106: disclose relative metric ‘score’ that quantifies the similarity between two pieces ‘target and other’, portions, or segments of media content).

It is noted, however, O’Neill did not specifically detail the aspects of (e) a step in which a similar content that is similar to the target content is selected if the similarity score meets predetermined criteria as recited in claim 1.

On the other hand, Bex achieved the aforementioned limitations by providing mechanisms of (e) a step in which a similar content that is similar to the target content is selected if the similarity score meets predetermined criteria (Bex: paragraph 0027: disclose assessment scale generator generates the similarity scores in part based on the received user-defined ‘predetermine’ criteria). The motivation for doing so would have been to semantic comparison tools for evaluating similarity and equivalency between documents and text produced by different institutions or different fields of knowledge (Bex: paragraph 0002).
It is noted, however, neither O’Neill nor Bex specifically detail the aspects of (b) an episode-level featurization step for extracting episode-level features from the preprocessed script data for each individual episode of the plurality of episodes; (c) a content-level featurization step for generating content-level features for each content of the plurality of drama contents by synthesizing the episode-level features extracted during the episode-level featurization step as recited in claim 1.

On the other hand, Walsh achieved the aforementioned limitations by providing mechanisms of (b) an episode-level featurization step for extracting episode-level features (Walsh: paragraph 0032: disclose signatures ‘features’ may be generated ‘extracted’ from language-level metrics that identify categories, groups, or common characteristics ‘features’ of content) from the preprocessed script data (Walsh: paragraph 0032: disclose caption data ‘script data’ associated with content ‘episode’) for each individual episode of the plurality of episodes (Walsh: paragraph 0032: disclose an episode ‘individual episode’ of a particular television series ‘plurality of episodes’ may have characteristics such as words, phrases, complexity level, or any other characteristic that are very similar to another episode of the same television series); (c) a content-level featurization step for generating content-level features for each content of the plurality of drama (Walsh: paragraph 0033: disclose a signature may include a value for each of any number of characteristics of content, from 1 to 5, 10, 20, 100, 1,000, 10,000, or more characteristics ‘features’. Characteristics of content may include words, phrases, complexity level, reading level, profanity, caption metadata signals, speaker changes, and any other indicator of information about the content.
Content of a certain medical drama) contents by synthesizing (Walsh: paragraph 0034: disclose content may be analyzed ‘synthesizing’ and result in the generation of a signature having values for certain characteristics within a particular range (e.g., 90% of its characteristics within +/−20%, 85% of its characteristics within +/−15%, 80% of its characteristics within +/−10%, etc.) of another signature, it may be readily identified within a category, sub-category, or related sub-category of content having similar signatures) the episode-level features extracted during the episode-level featurization step (Walsh: paragraph 0032: disclose signatures may be generated from language-level metrics that identify categories, groups, or common characteristics of content). The motivation for doing so would have been to provide language-level content recommendations to users based on an analysis of closed captions of content viewed by the users and other data (Walsh: Abstract).

O’Neill, Walsh and Bex are analogous art because they are from the “same field of endeavor” and both from the same “problem-solving area”. Namely, they are both from the field of “Content Management Systems”. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the systems of O’Neill, Walsh and Bex because they are both directed to content management systems and both are from the same field of endeavor. The skilled person would therefore regard it as a normal option to include the restriction features of Bex and Walsh with the method described by O’Neill in order to solve the problem posed. Therefore, it would have been obvious to combine Bex and Walsh with O’Neill to obtain the invention as specified in instant claim 1.

As per claim 2, most of the limitations of this claim have been noted in the rejection of claim 1 above.
In addition, O’Neill disclose, wherein the features extracted in the episode-level featurization step are generated as vectors having multiple dimensions, wherein the vectors are episode-level feature vectors (O’Neill: paragraph 0014: disclose high-dimensional vectors that provide a representation of sentiment across the media content).

As per claim 3, most of the limitations of this claim have been noted in the rejection of claims 1 and 2 above. In addition, O’Neill disclose, wherein the step of generating the episode-level feature vectors comprises a step of embedding the script data (O’Neill: paragraph 0014: disclose selecting a trend analysis embedding graph in which embedding layers convert trending topics or products into vectors) using a pretrained large language model (PLLM) (O’Neill: paragraph 0081: disclose using large language models (LLM) and other large generative AI model).

As per claim 4, most of the limitations of this claim have been noted in the rejection of claims 1, 2 and 3 above. In addition, O’Neill disclose, wherein the step of embedding the script data using the PLLM comprises: a step of dividing the script data into multiple chunks; a step of generating vectors corresponding to the multiple chunks using the PLLM; a step of generating an episode-level feature vector using the vectors corresponding to the generated multiple chunks (O’Neill: paragraph 0265: disclose use large language models (LLMs) for context understanding, sentiment analysis, transcript generation, and topic detection from speech).

As per claim 5, most of the limitations of this claim have been noted in the rejection of claims 1, 2, 3 and 4 above.
In addition, O’Neill disclose, wherein the step of generating episode-level feature vectors using the vectors corresponding to the multiple chunks comprises generating episode-level feature vectors using a weighted average method, wherein the vectors corresponding to the plurality of chunks are weighted according to the length of each chunk (O’Neill: paragraph 0077: disclose In convolutional neural networks, the weighted sum for each output activation is computed based on a batch of inputs, and the same matrices of weights (called “filters”) are applied to every output).

As per claim 6, most of the limitations of this claim have been noted in the rejection of claims 1, 2, 3 and 4 above. In addition, O’Neill disclose, wherein the step of generating episode-level feature vectors comprises generating episode-level feature vectors by sequentially concatenating the vectors corresponding to each chunk (O’Neill: paragraph 0080: disclose weigh different parts of an input sequence when producing an output sequence).

As per claim 7, most of the limitations of this claim have been noted in the rejection of claims 1, 2 and 3 above. In addition, O’Neill disclose, wherein the step of embedding the script data using the PLLM comprises embedding the script data at any internal layer that is not the final layer of the PLLM (O’Neill: paragraph 0265: disclose use large language models (LLMs) for context understanding, sentiment analysis, transcript generation, and topic detection from speech).

As per claim 8, most of the limitations of this claim have been noted in the rejection of claims 1, 2 and 3 above. It is noted, however, O’Neill did not specifically detail the aspects of wherein the step of generating episode-level feature vectors further comprises generating episode-level feature vectors by embedding the script data using a Doc2Vec model as recited in claim 8.
On the other hand, Bex achieved the aforementioned limitations by providing mechanisms of wherein the step of generating episode-level feature vectors further comprises generating episode-level feature vectors by embedding the script data using a Doc2Vec model (Bex: paragraph 0050: disclose document embedding algorithms (e.g., Doc2Vec)).

As per claim 9, most of the limitations of this claim have been noted in the rejection of claims 1, 2, 3 and 8 above. It is noted, however, O’Neill did not specifically detail the aspects of wherein the step of generating episode-level feature vectors further comprises generating episode-level feature vectors by embedding the script data using TF-IDF (Term Frequency-Inverse Document Frequency) as recited in claim 9. On the other hand, Bex achieved the aforementioned limitations by providing mechanisms of wherein the step of generating episode-level feature vectors further comprises generating episode-level feature vectors by embedding the script data using TF-IDF (Term Frequency-Inverse Document Frequency) (Bex: paragraph 0055: disclose machine learning module uses cosine similarity on term frequency-inverse document frequency (TF-IDF) vectors).

As per claim 10, most of the limitations of this claim have been noted in the rejection of claim 1 above. In addition, O’Neill disclose, wherein the content-level featurization step is characterized in that it generates a single vector for the target content based on the features extracted in the episode-level featurization step, with the single vector being referred to as the content-level feature vector (O’Neill: paragraph 0087: disclose vectors may be refined during the model's training phase to encapsulate the characteristics or attributes of the tokens).

As per claim 11, most of the limitations of this claim have been noted in the rejection of claims 1 and 10 above.
In addition, O’Neill disclose, wherein the content-level featurization step is characterized in that it generates a content-level feature vector by averaging the episode-level feature vectors generated in the episode-level featurization step (O’Neill: paragraph 0087: disclose vectors may be refined during the model's training phase to encapsulate the characteristics or attributes of the tokens).

As per claim 12, most of the limitations of this claim have been noted in the rejection of claims 1 and 10 above. In addition, O’Neill disclose, wherein the content-level featurization step is characterized in that it generates a content-level feature vector by sequentially concatenating the episode-level feature vectors generated in the episode-level featurization step (O’Neill: paragraph 0087: disclose vectors may be refined during the model's training phase to encapsulate the characteristics or attributes of the tokens).

As per claim 13, most of the limitations of this claim have been noted in the rejection of claim 1 above. It is noted, however, O’Neill did not specifically detail the aspects of wherein the step of determining the similarity score comprises a step of calculating the cosine similarity between the content-level feature vector of the target content and the content-level feature vector of other content to generate a first similarity score as recited in claim 13. On the other hand, Bex achieved the aforementioned limitations by providing mechanisms of wherein the step of determining the similarity score comprises a step of calculating the cosine similarity between the content-level feature vector of the target content and the content-level feature vector of other content to generate a first similarity score (Bex: paragraph 0055: disclose the similarity score is able to be calculated based on the cosine of the angle between two document vectors).

As per claim 14, most of the limitations of this claim have been noted in the rejection of claims 1 and 13 above.
In addition, O’Neill disclose, wherein the step of determining the similarity score further comprises: a step of normalizing the first similarity score; and a step of applying weights to the normalized first similarity score to generate a second similarity score (O’Neill: paragraph 0106: disclose compute a contextual distance value using a weighted combination of various factors. Examiner argues that the weighted is adjusting the score can be applied to this limitation).

As per claim 15, most of the limitations of this claim have been noted in the rejection of claim 1 above. In addition, O’Neill disclose, a step of generating a script database that stores multiple script data prior to step (a), and wherein the other content in step (d) is stored in the script database (O’Neill: paragraph 0157: disclose extraction and analyze the media content collected by the media ingestion engine and add detailed data to the media content knowledge repository, where repository is equated to database).

As per claim 16, O’Neill disclose, A method for searching for similar content (O’Neill: paragraph 0106: disclose quantifies the similarity between two pieces, portions, or segments of media content) using a system comprising a CPU and memory (O’Neill: paragraph 0650: disclose system operates on CPU of the computing device), comprising the steps of: (a1) a script data receiving step for receiving script data for specific target content (O’Neill: paragraph 0139: disclose receive or retrieve relevant text and audiovisual content (e.g., influencer video), analyze the content to identify the primary topic ‘target’): remaining limitations in this claim 16 are similar to claim 1. Therefore, the remaining limitations in this claim 16 are rejected under the same rationale as claim 1.
As per claim 17, O’Neill disclose, A similar content search system (O’Neill: paragraph 0106: disclose quantifies the similarity between two pieces, portions, or segments of media content) comprising a CPU and memory, wherein the CPU executes commands stored in the memory to implement the similar content search method (O’Neill: paragraph 0650: disclose system operates on CPU of the computing device), the method comprising: remaining limitations in this claim 17 are similar to claim 1. Therefore, the remaining limitations in this claim 17 are rejected under the same rationale as claim 1.

As per claim 18, O’Neill disclose, A similar content search system (O’Neill: paragraph 0106: disclose quantifies the similarity between two pieces, portions, or segments of media content) comprising a CPU and memory, wherein the CPU executes commands stored in the memory to implement the similar content search method (O’Neill: paragraph 0650: disclose system operates on CPU of the computing device), the method comprising: remaining limitations in this claim 18 are similar to claim 16. Therefore, the remaining limitations in this claim 18 are rejected under the same rationale as claim 16.

Response to Arguments

Applicant’s arguments with respect to claims 1-18 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Examiner rejected amended and argued limitation in independent claims 1, 16, 17 and 18 with new prior-art reference Richard Walsh US 2019/0205373 A1.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

US Pub. US 2013/0124984 A1 disclose “Method and Apparatus for Providing Script Data”
US Pub. US 2025/0131934 A1 disclose “AUDIO GENERATION SYSTEM AND METHOD”

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action.
Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PAVAN MAMILLAPALLI whose telephone number is (571)270-3836. The examiner can normally be reached on M-F, 8am - 4pm, EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ann J Lo, can be reached on (571) 272-9767. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov.
Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PAVAN MAMILLAPALLI/
Primary Examiner, Art Unit 2159
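The claim mappings above describe a concrete pipeline: divide each episode's script into chunks and embed them (claims 3-4), average the chunk vectors weighted by chunk length (claim 5), average the episode-level vectors into one content-level vector (claim 11), and compare content vectors by cosine similarity (claim 13). A minimal sketch of that pipeline follows; the hashed bag-of-words `embed_chunk` is a deterministic toy stand-in for a real PLLM embedding, and every function name here is an illustrative assumption, not code from the application:

```python
import math

DIM = 32  # toy embedding width; a real PLLM embedding would be far larger

def embed_chunk(text, dim=DIM):
    """Deterministic hashed bag-of-words vector (toy stand-in for a PLLM)."""
    vec = [0.0] * dim
    for token in text.lower().split():
        vec[sum(token.encode()) % dim] += 1.0
    return vec

def episode_vector(script, chunk_size=50):
    """Claims 4-5: chunk the script, embed each chunk, then average the
    chunk vectors with weights proportional to chunk length."""
    words = script.split()
    chunks = [words[i:i + chunk_size] for i in range(0, len(words), chunk_size)]
    vectors = [embed_chunk(" ".join(c)) for c in chunks]
    weights = [len(c) for c in chunks]
    total = sum(weights)
    return [sum(w * v[d] for w, v in zip(weights, vectors)) / total
            for d in range(DIM)]

def content_vector(episode_scripts):
    """Claim 11: content-level feature vector as the average of the
    episode-level feature vectors."""
    eps = [episode_vector(s) for s in episode_scripts]
    return [sum(v[d] for v in eps) / len(eps) for d in range(DIM)]

def cosine_similarity(a, b):
    """Claim 13: first similarity score as the cosine of the angle
    between two content-level feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0
```

The selection step of claim 1(e) would then keep any candidate whose score meets a predetermined criterion, e.g. `cosine_similarity(target, other) >= 0.9`.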

Prosecution Timeline

Dec 10, 2024: Application Filed
Jul 26, 2025: Non-Final Rejection — §103
Oct 29, 2025: Response Filed
Feb 05, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602389: RECOMMENDATION WORD DETERMINATION METHOD AND APPARATUS, AND ELECTRONIC DEVICE AND STORAGE MEDIUM (2y 5m to grant; granted Apr 14, 2026)
Patent 12603155: METHODS FOR COMPRESSION OF MOLECULAR TAGGED NUCLEIC ACID SEQUENCE DATA (2y 5m to grant; granted Apr 14, 2026)
Patent 12601597: GENERATING, FROM DATA OF FIRST LOCATION ON SURFACE, DATA FOR ALTERNATE BUT EQUIVALENT SECOND LOCATION ON THE SURFACE (2y 5m to grant; granted Apr 14, 2026)
Patent 12602503: GENERATING, FROM DATA OF FIRST LOCATION ON SURFACE, DATA FOR ALTERNATE BUT EQUIVALENT SECOND LOCATION ON THE SURFACE (2y 5m to grant; granted Apr 14, 2026)
Patent 12591580: CONFIDENCE FABRIC ENHANCED PRIVACY-PRESERVING DATA AGGREGATION (2y 5m to grant; granted Mar 31, 2026)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 80%
With Interview: 98% (+17.2%)
Median Time to Grant: 3y 3m
PTA Risk: Moderate
Based on 743 resolved cases by this examiner. Grant probability derived from career allow rate.
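As a sanity check, the headline figures are consistent with the footnote's stated derivation under a simple additive reading (an assumption on my part; the dashboard does not publish its formula): 597 grants over 743 resolved cases gives the ~80% allow rate, and adding the +17.2-point interview lift lands on the ~98% with-interview figure.

```python
granted, resolved = 597, 743
interview_lift = 17.2  # percentage points, from the examiner card

allow_rate = granted / resolved * 100          # career allow rate
with_interview = allow_rate + interview_lift   # assumed additive model

print(round(allow_rate), round(with_interview))  # 80 98
```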
