Prosecution Insights
Last updated: April 19, 2026
Application No. 18/359,643

FEATURE MANAGEMENT

Non-Final OA (§101, §103)
Filed: Jul 26, 2023
Examiner: SHALU, ZELALEM W
Art Unit: 2145
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Lemon Inc.
OA Round: 1 (Non-Final)
Grant Probability: 29% (At Risk)
Projected OA Rounds: 1-2
Projected Time to Grant: 3y 2m
Grant Probability With Interview: 48%

Examiner Intelligence

Career Allow Rate: 29% (grants only 29% of cases; 31 granted / 108 resolved; -26.3% vs TC avg)
Interview Lift: +19.0% among resolved cases with interview
Typical Timeline: 3y 2m avg prosecution; 34 currently pending
Career History: 142 total applications across all art units

Statute-Specific Performance

§101: 14.3% (-25.7% vs TC avg)
§103: 63.4% (+23.4% vs TC avg)
§102: 8.1% (-31.9% vs TC avg)
§112: 10.8% (-29.2% vs TC avg)

Deltas are vs. the Tech Center average estimate. Based on career data from 108 resolved cases.

Office Action

Rejections: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is in response to the Application filed on 07/26/2023. Claims 1-20 are pending in the case. All claims are examined and rejected accordingly.

Information Disclosure Statement

As required by MPEP 609(c), the Applicant's submission of the Information Disclosure Statements filed on 11/06/2026 and 12/12/2024 is acknowledged by the examiner, and the cited references have been considered in the examination of the claims now pending.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea, without significantly more.

Step 1: Under the first part of the analysis, the claim is directed to a computer-implemented method, which is a process and falls within one of the four statutory categories (i.e., process, machine, manufacture, or composition of matter).

Regarding independent Claims 1, 12 and 20, at Step 2A, prong 1: Does the claim recite a judicial exception? Claim 1 recites the steps of: obtaining a first event associated with a first and a second object, and obtaining a second event associated with the first and second events, a type of the first event being different from a type of the second event
(this step relies on collection of information or data gathering, which falls within the data organization grouping of abstract ideas); determining a first feature of the first object based on a first encoder, and determining a second feature of the second object based on a second encoder (this step relies on mathematical feature extraction (encoding), which falls within the mathematical concepts grouping of abstract ideas); and updating the first encoder based on the first and second features and the first and second events (this step relies on mathematical model training, which falls within the mathematical concepts grouping of abstract ideas).

Step 2A, prong 2: Does the claim recite additional elements? Do those additional elements, individually and in combination, integrate the judicial exception into a practical application? The claim does not recite any additional element that could integrate this abstract idea into a practical application, because the additional elements recited consist of: an electronic device, comprising a computer processor coupled to a computer-readable memory unit, the memory unit comprising instructions that when executed by the computer processor implement a method for feature management (Claim 12), and a non-transitory computer program product, the non-transitory computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by an electronic device to cause the electronic device to perform a method for feature management (Claim 20) (generic computer components on which to implement the mathematical abstract idea; see MPEP 2106.05(f)); the first encoder and second encoder (generic ML components performing mathematical transformations); and updating the encoder (a conventional ML training step that does not recite any specific ML model improvement). The additional elements are recited at a high level of generality and do not amount to significantly more than the abstract idea (MPEP
2106.05(f)). The claim uses a computer to perform mathematical operations and does not improve the functioning of the computer or any other technology. Accordingly, the claim does not integrate the abstract idea into a practical application. Thus, the claim is directed to the abstract idea.

Step 2B: Do the additional elements, considered individually and in combination, amount to significantly more than the judicial exception? The claim does not recite any additional element that could amount to significantly more than the abstract idea, because the additional elements recited consist of: an electronic device, comprising a computer processor coupled to a computer-readable memory unit, the memory unit comprising instructions that when executed by the computer processor implement a method for feature management (Claim 12), and a non-transitory computer program product, the non-transitory computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by an electronic device to cause the electronic device to perform a method for feature management (Claim 20) (generic computer components on which to implement the mathematical abstract idea; see MPEP 2106.05(f)); the first encoder and second encoder (generic ML components performing mathematical transformations); and updating the encoder (a conventional ML training step that does not recite any specific ML model improvement). The additional elements, alone and in combination, fail to integrate the abstract idea into a practical application or add "significantly more." Thus, the claims are not patent eligible. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept. Neither can insignificant extra-solution activity. All of these additional elements as generically claimed are thus considered well-understood, routine, and conventional.
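For orientation, the claim 1 arrangement the rejection characterizes as mathematical (two encoders producing per-object features, with the first encoder updated based on both features and both events) can be sketched as a toy training step. This is an illustrative sketch only, not the applicant's disclosed implementation: the linear encoders, the dot-product prediction, and the squared-error loss are all assumptions introduced for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

class LinearEncoder:
    """Toy encoder: one linear map from raw object data to a feature vector."""
    def __init__(self, in_dim, out_dim):
        self.W = rng.normal(scale=0.1, size=(out_dim, in_dim))

    def encode(self, x):
        return self.W @ x

# Hypothetical first and second objects (e.g. a user and an item).
user, item = rng.normal(size=8), rng.normal(size=8)
user_encoder, item_encoder = LinearEncoder(8, 4), LinearEncoder(8, 4)

# Claimed steps: determine a feature of each object with its own encoder.
f_user = user_encoder.encode(user)
f_item = item_encoder.encode(item)

# Two events of different types (e.g. a click and a later subscription),
# reduced here to scalar labels for illustration.
click_event, subscribe_event = 1.0, 0.0

# "Updating the first encoder based on the first and second features and the
# first and second events": one gradient step on a squared-error loss between
# a dot-product prediction and the second event (loss choice assumed).
pred = float(f_user @ f_item)
loss = (pred - subscribe_event) ** 2
grad_W = 2.0 * (pred - subscribe_event) * np.outer(f_item, user)
user_encoder.W -= 0.01 * grad_W
```

With a small learning rate, the single update moves the prediction toward the second event, so the loss after the step is lower than before, which is the sense in which the first encoder is "updated based on" both features and events.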
Therefore, these limitations, taken alone or in combination, do not integrate the abstract idea into a practical application or recite significantly more than the abstract idea. Thus, these independent claims are not patent eligible.

The dependent claims respectively recite a judicial exception in limitations of: "wherein updating the first encoder comprises: determining a second loss between the second event and a prediction of the second event that is determined based on the first and second features; and updating the first encoder based on the second loss" (claims 2/13); "generating a combination feature based on the first and second features; determining the prediction of the second event based on the combination feature and a second decoder describing an association between a reference feature that is related to a first and a second reference object, and a second reference event that is associated with the first and second reference objects, the first reference object having the same type as the first object, the second reference object having the same type as the second object, and the second reference event having the same type as the second event; and obtaining the second loss based on a difference between the second event and the second prediction of the second event" (claims 3/14); "determining an interaction feature based on the first and second features; and creating the combination feature by a concatenation of the first feature, the interaction feature, and the second feature" (claim 4); "determining a first loss between the first event and a prediction of the first event that is determined based on the first and second features; and updating the first encoder based on the first loss"
(claim 5); "determining the prediction of the first event based on the combination feature and a first decoder describing an association between the reference feature and a first reference event that is associated with the first and second reference objects, the first reference event having the same type as the first event; and obtaining the first loss based on a difference between the first event and the prediction of the first event" (claim 6); "updating the first decoder based on any of the first or second loss; or updating the second decoder based on any of the first or second loss" (claims 7/15); "obtaining a data repository that comprises a plurality of data items associated with the first and second objects, and the first and second events; wherein: the first event is obtained by extracting, from the data repository, at least one data item corresponding to the first event based on a definition of the data repository; the second event is obtained by extracting, from the data repository, at least one data item corresponding to the second event based on the definition of the data repository; the first object is obtained by extracting, from the data repository, at least one data item corresponding to the first object based on the definition of the data repository; and the second object is obtained by extracting, from the data repository, at least one data item corresponding to the second object based on the definition of the data repository"
(claims 8/17); "determining a frequency rate between a first occurrence frequency of the first event and a second occurrence frequency of the second event, the first occurrence frequency being above the second occurrence frequency; and obtaining the first and second events based on the frequency rate" (claims 9/16); "the first object comprises one of: a user of an application, and data that is provided to the user of the application; the second object comprises a further one of the user and data; and the first event comprises any of: a click event or an open event, and the second event comprises any of: a subscription event, an order event, a download event, an adding-to-bag event, a following event, or a comment event, the second event occurring after the first event" (claims 10/18); "extracting a feature of an object based on the first encoder; and implementing a downstream task of the object based on the extracted feature" (claims 11/19).

These additional limitations (in claims 2-11 and 13-19) also constitute concepts falling within the mathematical concepts or mathematical operations groupings of abstract ideas. This judicial exception is not integrated into a practical application. The additional elements of "computer readable medium comprising: computer program code" (in claims 2-11 and 13-19) amount to no more than adding insignificant extra-solution activity/specifications related to data gathering, data input, or data transmittal. These additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The dependent claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
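The dependent limitations above (claims 2-4 in particular) describe a concrete pipeline: derive an interaction feature from the two object features, concatenate first feature, interaction feature, and second feature into a combination feature, predict the second event from it via a decoder, and take a loss against the actual second event. A minimal numeric sketch, with the elementwise-product interaction, linear decoder, and squared-difference loss all assumed for illustration (the claims fix none of these choices):

```python
import numpy as np

# Stand-in features for the two objects (dimensions are arbitrary here).
f1 = np.array([0.2, -0.1, 0.5])
f2 = np.array([0.4, 0.3, -0.2])

# Claim 4: an interaction feature derived from both features; an elementwise
# product is one common choice (an assumption; the claim names no operation).
interaction = f1 * f2

# Claim 4: combination feature = concatenation of first, interaction, second.
combo = np.concatenate([f1, interaction, f2])

# Claims 2/3: predict the second event from the combination feature via a
# decoder (a fixed linear readout here, purely hypothetical weights), then
# form the second loss from the difference against the observed second event.
decoder_w = np.full(9, 0.1)
prediction = float(decoder_w @ combo)
second_event = 1.0
second_loss = (second_event - prediction) ** 2
```

Under claim 2, a step of this loss's gradient with respect to the first encoder's parameters would then constitute "updating the first encoder based on the second loss."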
As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of a non-transitory computer readable medium comprising computer program code are again insignificant extra-solution activity steps that cannot provide an inventive concept. All of these additional elements as generically claimed are considered well-understood, routine, and conventional. Therefore, these limitations, taken alone or in combination, do not integrate the abstract idea into a practical application or recite significantly more than the abstract idea. Thus, all of the dependent claims are also not patent eligible.

Examiner Comments

13. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claim Rejections - 35 USC § 103

14. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

15. Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Li (Pub. No. US 20110131166 A1, Pub. Date 2011-06-02) in view of Shazeer (Pub. No. US 20200089755 A1, Pub.
Date 2020-03-19).

Li teaches a method for feature management (see Li: Abstract, "Using the model, a probability of a predicted user attribute based on the sample user behavior is predicted."), comprising: obtaining a first event associated with a first and a second object (see Li: Fig. 3, [0033], "as such users view media content, such viewing behavior may be tracked and/or collected (e.g., recorded) by the media content delivery host (e.g., via cookies, web bug, and/or other tracking mechanism)", i.e., the first object is the user, the second object is the item or content, and the first event is the interaction (click/view content)), and obtaining a second event associated with the first and second events, a type of the first event being different from a type of the second event (see Li: Fig. 3, [0034], "the collection step 302 may also attempt to determine which media content a user is interested in. For example, media content that has only been partially viewed by a user may be filtered out of the sample behaviors that are collected. Alternatively, a threshold level/percentage of media content viewed may be used.", i.e., the viewing of content is the second event associated with the first and second events (click/view of content by the user), and the two events are in distinct categories). As shown above, Li teaches collecting and analyzing user-item interaction events (e.g., clicks, views, purchases) to model user behavior. Li further discloses using multiple types of interaction events associated with the same user and item to improve prediction and recommendation performance; the system learns from these interactions and updates models based on the interaction data to generate predictions.
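The examiner's mapping reads Li's tracking log as the claimed events and objects: user = first object, content item = second object, click/view = first event, and sufficiently complete viewing = second event of a different type. That reading might be organized as follows; this is a hypothetical sketch of the mapping, not Li's disclosed data model, and the 50% threshold is an assumed stand-in for Li's unspecified "threshold level/percentage."

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    user_id: str          # first object (the user)
    item_id: str          # second object (the media content item)
    kind: str             # event type; the two claimed events must differ in type
    watched_pct: float = 0.0

# A toy interaction log in the style the examiner attributes to Li [0033]-[0034].
log = [
    Event("u1", "show42", "click"),
    Event("u1", "show42", "view", watched_pct=0.35),
    Event("u1", "show42", "view", watched_pct=0.95),
]

# Li [0034]: partially viewed content may be filtered out, or a threshold
# level/percentage of content viewed may be used (threshold value assumed).
THRESHOLD = 0.5
second_events = [e for e in log if e.kind == "view" and e.watched_pct >= THRESHOLD]
first_events = [e for e in log if e.kind == "click"]

# The two event types are distinct, matching "a type of the first event being
# different from a type of the second event."
assert first_events[0].kind != second_events[0].kind
```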
Li does not teach the method wherein: determining a first feature of the first object based on a first encoder, and determining a second feature of the second object based on a second encoder; and updating the first encoder based on the first and second features and the first and second events. However, Shazeer teaches the method wherein: determining a first feature of the first object based on a first encoder (see Shazeer: Fig. 1, [0043], "The multi task multi modal machine learning model 100 includes multiple input modality neural networks 102a-102c, an encoder neural network 104, a decoder neural network 106, and multiple output modality neural networks 108a-108c. Data inputs, e.g., data input 110, received by the multi task multi modal machine learning model 100 are provided to the multiple input modality neural networks 102a-102c and processed by an input modality neural network corresponding to the modality (domain) of the data input."); determining a second feature of the second object based on a second encoder (see Shazeer: Fig. 1, [0049], "The encoder neural network 104 is a neural network that is configured to process mapped data inputs from the unified representation space, e.g., mapped data input 112, to generate respective encoder data outputs in the unified representation space, e.g., encoder data output 114."); and updating the first encoder based on the first and second features and the first and second events (see Shazeer: Fig. 6, [0082], "the system processes the decoder output using the selected output modality neural network to generate data representing an output of the second modality of the machine learning task (step 612).", i.e.,
updating encoder parameters via the training workflow of Fig. 6).

Because both Li and Shazeer are in the same/similar field of endeavor of machine learning training data, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the invention, to modify the teaching of Li to include a system that determines a first feature of the first object based on a first encoder, determines a second feature of the second object based on a second encoder, and updates the first encoder based on the first and second features and the first and second events, as taught by Shazeer. One would have been motivated to make such a combination to improve performance of the model when performing tasks in different domains, particularly when the tasks in the different domains have limited quantities of training data available (see Shazeer, [0029]).

Regarding Claim 2: As shown above, Li and Shazeer teach all the limitations of claim 1. Shazeer further teaches that updating the first encoder comprises: determining a second loss between the second event and a prediction of the second event that is determined based on the first and second features (see Shazeer: Fig. 6, [0080], "The system processes the mapped input of the unified representation space using an encoder neural network and a decoder neural network to generate a decoder output (step 608). The decoder output represents a representation of an output of the machine learning task in the unified representation space."); and updating the first encoder based on the second loss (see Shazeer: Fig. 6, [0082], "the system processes the decoder output using the selected output modality neural network to generate data representing an output of the second modality of the machine learning task (step 612).", i.e.,
updating encoder parameters via the training workflow of Fig. 6). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the invention, to modify the teaching of Li to include a system that determines a second loss between the second event and a prediction of the second event that is determined based on the first and second features and updates the first encoder based on the second loss, as taught by Shazeer. One would have been motivated to make such a combination to improve performance of the model when performing tasks in different domains, particularly when the tasks in the different domains have limited quantities of training data available (see Shazeer, [0029]).

Regarding Claim 3: As shown above, Li and Shazeer teach all the limitations of claim 2. Shazeer further teaches that determining the second loss comprises: generating a combination feature based on the first and second features (see Shazeer: Fig. 1, [0043], "The multi task multi modal machine learning model 100 includes multiple input modality neural networks 102a-102c, an encoder neural network 104, a decoder neural network 106."); determining the prediction of the second event based on the combination feature and a second decoder describing an association between a reference feature that is related to a first and a second reference object, and a second reference event that is associated with the first and second reference objects, the first reference object having the same type as the first object, the second reference object having the same type as the second object, and the second reference event having the same type as the second event (see Shazeer: Fig. 3, [0051], "The encoder neural network 104 and decoder neural network 106 may include neural network components from multiple machine learning domains.
For example, the encoder neural network 104 and decoder neural network 106 may include (i) one or more convolutional neural network layers, e.g., a stack of multiple convolutional layers with various types of connections between the layers, (ii) one or more attention neural network layers configured to perform respective attention mechanisms, (iii) one or more sparsely gated neural network layers."); and obtaining the second loss based on a difference between the second event and the second prediction of the second event (see Shazeer: Fig. 1, [0047], "Received machine learning model data inputs may include data inputs from different modalities with different sizes and dimensions. For example, data inputs may include representations of images, audio or sound waves. Similarly, each output modality neural network of the multiple output modality networks 108a-c is configured to map data outputs of the unified representation space received from the decoder neural network, e.g., decoder data output 116, to mapped data outputs of one of the multiple modalities."). See motivation to combine Li and Shazeer in claim 1.

Regarding Claim 4: As shown above, Li and Shazeer teach all the limitations of claim 3. Li further teaches that generating the combination feature comprises: determining an interaction feature based on the first and second features (see Li: Fig. 3, [0034], "the collection step 302 may also attempt to determine which media content a user is interested in. For example, media content that has only been partially viewed by a user may be filtered out of the sample behaviors that are collected. Alternatively, a threshold level/percentage of media content viewed may be used.", i.e., the viewing of content is the second event associated with the first and second events (click/view of content by the user), and the two events are in distinct categories)
; and creating the combination feature by a concatenation of the first feature, the interaction feature, and the second feature (see Li: Fig. 3, [0050], "For example, assume that sample data set (of training values) provides for corresponding (input, output) values (e.g., [certain input behavior, output attribute]) of (1.0, 0.9), (1.0, 0.0), (1.0, 0.2), (1.0, 5.0), (1.0, 0.2). The network may be trained with such values and assume it results in a trained output value of 0.2 based on an input of 1.0. In step 306, the sample data set is processed by the network to produce the probabilities.")

Regarding Claim 5: As shown above, Li and Shazeer teach all the limitations of claim 1. Li further teaches that updating the first encoder further comprises: determining a first loss between the first event and a prediction of the first event that is determined based on the first and second features (see Li: Fig. 3, [0048], "after training 304, to utilize the model, the outputs 408 need to be turned into probabilities. Thus, at step 306, the probabilities of output attributes are predicted based on the users' behaviors based on the model. In the case of Boolean values (e.g., gender), there may only be a single output 408. However, in the case of non-boolean values (e.g., income), the output layer 408 may have several nodes representing the different outputs such as income amounts (e.g., $0-$15K, $15K-$25K, $25K-$50K, etc.)."); and updating the first encoder based on the first loss (see Li: Fig. 3, [0059], "With the fuzzily determined user's demographic attributes from step 308, the user experiences can be improved at step 310. As described above, such attributes can be used in many applications such as personalized recommendations or advertisement targeting to improve the user experience. For example, the attributes of a user may be used to provide flexibility for an advertiser desiring to target a particular user base.")
Regarding Claim 6: As shown above, Li and Shazeer teach all the limitations of claim 1. Shazeer further teaches that determining the first loss comprises: determining the prediction of the first event based on the combination feature and a first decoder describing an association between the reference feature and a first reference event that is associated with the first and second reference objects, the first reference event having the same type as the first event (see Shazeer: Fig. 3, [0051], "The encoder neural network 104 and decoder neural network 106 may include neural network components from multiple machine learning domains. For example, the encoder neural network 104 and decoder neural network 106 may include (i) one or more convolutional neural network layers, e.g., a stack of multiple convolutional layers with various types of connections between the layers, (ii) one or more attention neural network layers configured to perform respective attention mechanisms, (iii) one or more sparsely gated neural network layers."); and obtaining the first loss based on a difference between the first event and the prediction of the first event (see Shazeer: Fig. 1, [0047], "Received machine learning model data inputs may include data inputs from different modalities with different sizes and dimensions. For example, data inputs may include representations of images, audio or sound waves. Similarly, each output modality neural network of the multiple output modality networks 108a-c is configured to map data outputs of the unified representation space received from the decoder neural network, e.g., decoder data output 116, to mapped data outputs of one of the multiple modalities."). See motivation to combine Li and Shazeer in claim 1.

Regarding Claim 7: As shown above, Li and Shazeer teach all the limitations of claim 1. Shazeer further teaches the system wherein: updating the first decoder based on any of the first or second loss (see Shazeer: Fig.
3, [0050], "The decoder neural network 106 is a neural network, e.g., an autoregressive neural network, that is configured to process encoder data outputs from the unified representation space, e.g., encoder data output 114, to generate respective decoder data outputs from an output space, e.g., decoder data output 116. An example decoder neural network is illustrated and described in more detail below with reference to FIG. 5."); or updating the second decoder based on any of the first or second loss (see Shazeer: Fig. 1, [0050], "In some implementations the multiple input modality neural networks 102a-c and multiple output modality neural networks 108a-c may include language modality neural networks."). See motivation to combine Li and Shazeer in claim 1.

Regarding Claim 8: As shown above, Li and Shazeer teach all the limitations of claim 1. Li further teaches the system wherein: obtaining a data repository that comprises a plurality of data items associated with the first and second objects, and the first and second events (see Li: Fig. 3, [0042], "In view of the above, the model is trained for numerous different sample users. However, as one might assume, different users that watch the same media content (i.e., their behavior) may have different attributes (e.g., male v. female). The neural network is trained using multiple users' behavioral data and attributes. Accordingly, after processing the various edges/links for one user, the behavioral and attribute data for different users are used to adjust those values/weights. In this manner, the neural network reflects/considers a broad range of the sample users' behavior and attributes collected at step 302. After all of the training at step 304 is complete, the inputs may have certain values 402 but the outputs 408 and weights will not match every (or potentially any) user exactly.
"), wherein: the first event is obtained by extracting, from the data repository, at least one data item corresponding to the first event based on a definition of the data repository (see Li: Fig. 3, [0060], "The user's watching history may be recorded (e.g., at step 302). If the user has watched several complete videos of a show, it can be determined that the user is interested in this show. An attempt is made (at step 302) to find all shows that each user might be interested in. The sample users' attributes and watching behaviors are used to train the predicting model for a "gender" demographic at step 304."); the second event is obtained by extracting, from the data repository, at least one data item corresponding to the second event based on the definition of the data repository (see Li: Fig. 3, [0033]); the first object is obtained by extracting, from the data repository, at least one data item corresponding to the first object based on the definition of the data repository (see Li: Fig. 3, [0061], "At step 306, the model is used to predict a user's "gender" from the user's watching behaviors. For example, if the probability of the user to be "male" is 80% (so the probability to be "female" is 20%), a soft decision (via step 308) can be used to determine that the user should be "male" in 80% probability, or a hard decision can be used to determine that the user is "male". If it is known that the user is male, more shows can be recommended to the user that men like at step 310."); and the second object is obtained by extracting, from the data repository, at least one data item corresponding to the second object based on the definition of the data repository (see Li: Fig. 3, [0050], "For example, assume that sample data set (of training values) provides for corresponding (input, output) values (e.g., [certain input behavior, output attribute]) of (1.0, 0.9), (1.0, 0.0), (1.0, 0.2), (1.0, 5.0), (1.0, 0.2).
The network may be trained with such values and assume it results in a trained output value of 0.2 based on an input of 1.0. In step 306, the sample data set is processed by the network to produce the probabilities.")

Regarding Claim 9: As shown above, Li and Shazeer teach all the limitations of claim 1. Li further teaches the system wherein: obtaining the first and second events (see Li: Fig. 3, [0033], "step 302, sample users' behaviors and attributes are collected. As described above, some users are registered or have signed up with a media content delivery host (e.g., a website). The attributes of such users may be provided or known (e.g., gender, age, location, etc.). Further, as such users view media content, such viewing behavior may be tracked and/or collected (e.g., recorded) by the media content delivery host (e.g., via cookies, web bug, and/or other tracking mechanism)."), comprises: determining a frequency rate between a first occurrence frequency of the first event and a second occurrence frequency of the second event, the first occurrence frequency being above the second occurrence frequency (see Li: Fig. 3, [0049], "To produce the probabilities, the training data (i.e., with the known inputs 402 and outputs 408) are input back into the model. As described above, since the training combines multiple different users, the output values (i.e., attributes) in nodes 408 produced from known inputs (i.e., behaviors) are unlikely to produce the actual known corresponding outputs"); and obtaining the first and second events based on the frequency rate (see Li: Fig. 3, [0049], "Once the known input values 402 are processed by the model to produce output values 408, the distribution of the results may be examined. The distribution may be examined using a fit algorithm (e.g., least squared fit) to determine where a new data point (i.e., a predicted attribute or output value 408) from a person lies.
The actual known output values corresponding to a particular input value may then be compared to the produced output values 408 to compute a probability.”)

Regarding claim 10: As shown above, Li and Shazeer teach all the limitations of claim 1. Li further teaches the system wherein:

the first object comprises one of: a user of an application, and data that is provided to the user of the application; the second object comprises a further one of the user and data (see Li: Fig. 3, [0060], “the user's watching history may be recorded (e.g., at step 302). If the user has watched several complete videos of a show, it can be determined that the user is interested in this show. An attempt is made (at step 302) to find all shows that each user might be interested in. The sample users' attributes and watching behaviors are used to train the predicting model for a "gender" demographic at step 304.”); and

the first event comprises any of: a click event or an open event, and the second event comprises any of: a subscription event, an order event, a download event, an adding-to-bag event, a following event, or a comment event, the second event occurring after the first event (see Li: Fig. 3, [0034], “The collection step 302 may also attempt to determine which media content a user is interested in. For example, media content that has only been partially viewed by a user may be filtered out of the sample behaviors that are collected. Alternatively, a threshold level/percentage of media content viewed may be used. In yet another alternative, a minimum number of viewings of episodes of a particular show may be required before a determination of interest in a particular show is made.”)

Regarding claim 11: As shown above, Li and Shazeer teach all the limitations of claim 1.
Li further teaches the system wherein:

extracting a feature of an object based on the first encoder (see Li: Fig. 3, [0052], “Once the probabilities of the attributes have been predicted at step 306, the user's attributes can be determined using fuzzy logic at step 308. Fuzzy logic is a superset of conventional (Boolean) logic that has been extended to handle the concept of partial truth--truth values between "completely true" and "completely false"”); and

implementing a downstream task of the object based on the extracted feature (see Li: Fig. 3, [0052], “Applying such a concept to the embodiments of the present invention provides the ability to process a user's behavior (as input) to produce a range representing the probability the user will have an attribute corresponding to that range. Accordingly, a soft decision is made to determine the predicted probabilities from step 306 so that each demographic attribute can be fuzzily determined.”)

Regarding independent claim 12: Claim 12 is directed to an electronic device claim and has similar/same claim limitations as claim 1 and is rejected under the same rationale.

Regarding Claim 13: Claim 13 is directed to a device claim and has similar/same claim limitations as Claim 2 and is rejected under the same rationale.

Regarding Claim 14: Claim 14 is directed to a system claim and has similar/same claim limitations as Claim 3 and is rejected under the same rationale.

Regarding Claim 15: Claim 15 is directed to a system claim and has similar/same claim limitations as Claim 7 and is rejected under the same rationale.

Regarding Claim 16: Claim 16 is directed to a non-transitory computer program product claim and has similar/same claim limitations as Claim 9 and is rejected under the same rationale.

Regarding Claim 17: Claim 17 is directed to a non-transitory computer program product claim and has similar/same claim limitations as Claim 8 and is rejected under the same rationale.
Regarding Claim 18: Claim 18 is directed to a non-transitory computer program product claim and has similar/same claim limitations as Claim 10 and is rejected under the same rationale.

Regarding Claim 19: Claim 19 is directed to a non-transitory computer program product claim and has similar/same claim limitations as Claim 11 and is rejected under the same rationale.

Regarding independent Claim 20: Claim 20 is directed to a non-transitory computer program product claim and has similar/same claim limitations as Claim 1 and is rejected under the same rationale.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

US 12530433 B1 (Tabak, Tom), "Robust Record-to-Event Conversion System": Systems and methods are disclosed comprising techniques for record-to-event conversion, such as retrieving at least one alphanumeric record associated with a monitored digital communication transmitted among two or more users, and generating a time-enumerated data structure that stores an event entry set for the monitored digital communication.

US 20230222387 A1 (Lakshmipathy, Sathish Kumar), "PREDICTIVE, MACHINE-LEARNING, EVENT-SERIES COMPUTER MODELS WITH ENCODED REPRESENTATION": The present disclosure relates generally to predictive computer models and, more specifically, to predictive, machine-learning, time-series computer models suitable for sparse training sets.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZELALEM W SHALU, whose telephone number is (571) 272-3003. The examiner can normally be reached M-F, 8:00 am-5:00 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Cesar Paula, can be reached at (571) 272-4128. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Zelalem Shalu/
Examiner, Art Unit 2145

/CESAR B PAULA/
Supervisory Patent Examiner, Art Unit 2145

Prosecution Timeline

Jul 26, 2023
Application Filed
Mar 21, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12477016
AUTOMATION OF VISUAL INDICATORS FOR DISTINGUISHING ACTIVE SPEAKERS OF USERS DISPLAYED AS THREE-DIMENSIONAL REPRESENTATIONS
2y 5m to grant · Granted Nov 18, 2025
Patent 12468969
METHODS FOR CORRELATED HISTOGRAM CLUSTERING FOR MACHINE LEARNING
2y 5m to grant · Granted Nov 11, 2025
Patent 12419611
PATIENT MONITOR, PHYSIOLOGICAL INFORMATION MEASUREMENT SYSTEM, PROGRAM TO BE USED IN PATIENT MONITOR, AND NON-TRANSITORY COMPUTER READABLE MEDIUM IN WHICH PROGRAM TO BE USED IN PATIENT MONITOR IS STORED
2y 5m to grant · Granted Sep 23, 2025
Patent 12153783
User Interfaces and Methods for Generating a New Artifact Based on Existing Artifacts
2y 5m to grant · Granted Nov 26, 2024
Patent 12120422
SYSTEMS AND METHODS FOR CAPTURING AND DISPLAYING MEDIA DURING AN EVENT
2y 5m to grant · Granted Oct 15, 2024
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
29%
Grant Probability
48%
With Interview (+19.0%)
3y 2m
Median Time to Grant
Low
PTA Risk
Based on 108 resolved cases by this examiner. Grant probability derived from career allow rate.
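
The with-interview figure above follows from combining the examiner's career allow rate (29%) with the observed interview lift (+19.0%). A minimal sketch of that arithmetic, assuming a simple additive lift model; the function name and the clamp to [0, 1] are illustrative, not the tool's actual methodology:

```python
def projected_grant_probability(career_allow_rate: float,
                                interview_lift: float = 0.0) -> float:
    """Combine a base allow rate with an interview lift, clamped to [0, 1].

    Hypothetical helper: assumes the lift is a simple additive adjustment
    to the examiner's career allow rate.
    """
    return min(1.0, max(0.0, career_allow_rate + interview_lift))

# Figures from the dashboard: 29% career allow rate, +19.0% interview lift
base = projected_grant_probability(0.29)
with_interview = projected_grant_probability(0.29, 0.19)
```

Under this additive assumption, `base` is 0.29 and `with_interview` comes out to 0.48, matching the 48% shown above.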
