Prosecution Insights
Last updated: April 19, 2026
Application No. 18/351,790

RECOMMENDATION INFORMATION PRESENTATION DEVICE, OPERATION METHOD OF RECOMMENDATION INFORMATION PRESENTATION DEVICE, OPERATION PROGRAM OF RECOMMENDATION INFORMATION PRESENTATION DEVICE

Status: Final Rejection (§101, §103)
Filed: Jul 13, 2023
Examiner: SIRJANI, FARIBA
Art Unit: 2659
Tech Center: 2600 — Communications
Assignee: Fujifilm Corporation
OA Round: 2 (Final)

Grant Probability: 76% (Favorable)
OA Rounds: 3-4
To Grant: 2y 10m
With Interview: 99%
Examiner Intelligence

Career Allow Rate: 76% (above average; 414 granted / 547 resolved; +13.7% vs TC avg)
Interview Lift: +31.0% (strong; resolved cases with vs. without interview)
Typical Timeline: 2y 10m avg prosecution; 31 currently pending
Career History: 578 total applications across all art units

Statute-Specific Performance

§101: 14.1% (-25.9% vs TC avg)
§103: 49.1% (+9.1% vs TC avg)
§102: 14.7% (-25.3% vs TC avg)
§112: 10.7% (-29.3% vs TC avg)

Based on career data from 547 resolved cases; TC averages are estimates.

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Claims 1-10 are pending. Claims 1 and 7-8 are independent and are amended. New Claims 9-10 are added that depend from Claim 1. This Application was published as U.S. 20230351122. Apparent priority: 19 February 2021.

Applicant's amendments and arguments have been considered but are either unpersuasive or moot in view of the new grounds of rejection necessitated by the amendments to the Claims. This action is Final.

Response to Amendments and Arguments

Arguments are moot in view of the new grounds responsive to the amendments. Claim 1 is amended as follows, and the other independent Claims similarly:

1. A recommendation information presentation device comprising: a processor; and a memory connected to or built into the processor, wherein the processor receives a story creation request from a user terminal via a network, analyzes an image held by a user to generate analysis information, upon reception of the story creation request, inputs the analysis information to a machine learning model for story creation and causes a story configured of a set of sentences describing a fictitious event based on the analysis information to be output from the machine learning model for story creation, generates recommendation information according to the story, and distribute the story and the recommendation information to the user terminal via the network, the story and the recommendation information being displayed on a display of the user terminal.

35 USC 101 Rejection

Applicant argues that the limitations of the independent Claims do not fall within the mental process category and additionally that they make a remarkable contribution to an inventive concept. Response at 5.

In Reply: Sometimes the claims are so broad that the problem is not whether they "cannot be practically performed in the human mind."
Rather, the problem is that the Claims do not set any concrete and workable criteria for a machine to perform these operations, such that nothing other than a human mind is capable of performing them. A machine requires more detail as to what it is supposed to do, whereas a human can make his own decisions based on broad instructions. The Claims are NOT considered to include the level of detail suitable for a machine, considering the level of skill in the art. As the mapping to the human interaction in the Rejection below shows, they pertain to a person looking at pictures and coming up with a story.

The additional elements of processor and memory are quite generic. The only component that could contribute to a technological invention is the use of the machine learning model. However, a person sitting in front of a computer, feeding pictures to the computer, and asking it to write a story is not an invention. Rather, the specifics of the machine learning model need to be claimed. The Nonfinal Rejection of 6/18/2025, p. 2, provided:

[Image: media_image1.png, excerpt from the Nonfinal Rejection of 6/18/2025, p. 2]

Please heed the above request and include the technological features of the instant Application inside the Claim.

35 USC 103 Rejection

Applicant's arguments are directed to the added limitations and also argue that the reference does not generate a Story. Response at 6. The added language is addressed with new or modified grounds. Regarding the "Story" argument: Kasina is directed to "Constructing A Narrative Based On A Collection Of Images," and a narrative and a story are the same thing unless the Claim defines the story to distinguish it from the "narrative" of Kasina. The output of Kasina is shown in Figure 3, e.g., and looks very much like a story based on input images, which is what the Claim asks for: "[0012] FIG. 3 shows an excerpt of an annotated album produced by the system of FIG. 1."

[Image: media_image2.png, FIG. 3 of Kasina, excerpt of an annotated album]

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-10 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.

This Application has a comprehensive Disclosure with 23 detailed Drawings. More of the Disclosure needs to be inside the Claims to overcome the abstract-idea rejection below, and it is fortunate that the Disclosure is not sparse and does include substance to draw from.

Step 1: The independent Claims are directed to statutory categories: Claim 1 is a system claim and directed to the machine or manufacture category of patentable subject matter. Claim 7 is a method claim and directed to the process category of patentable subject matter. Claim 8 is a computer-readable-storage-device claim and is directed to the machine or manufacture category of patentable subject matter.

Step 2A, Prong One: Does the Claim recite a Judicially Recognized Exception? Abstract Idea?
Are these Claims nevertheless considered abstract as a Mathematical Concept (mathematical relationships, mathematical formulas or equations, mathematical calculations), a Mental Process (concepts performed in the human mind, including an observation, evaluation, judgment, or opinion), or Certain Methods of Organizing Human Activity ((1) fundamental economic principles or practices, including hedging, insurance, and mitigating risk; (2) commercial or legal interactions, including agreements in the form of contracts, legal obligations, advertising, marketing or sales activities or behaviors, and business relations; (3) managing personal behavior or relationships or interactions between people, including social activities, teaching, and following rules or instructions), such that they fall under the judicial exception to patentable subject matter? The rejected Claims recite Mental Processes or Methods of Organizing Human Activity.

Step 2A, Prong Two: Do Additional Elements Integrate the Judicial Exception into a Practical Application?

This step involves identifying whether there are any additional elements recited in the claim beyond the judicial exception(s), and evaluating those additional elements to determine whether they integrate the exception into a practical application of the exception. "Integration into a practical application" requires an additional element or a combination of additional elements in the claim to apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the exception. The analysis uses the considerations laid out by the Supreme Court and the Federal Circuit to evaluate whether the judicial exception is integrated into a practical application.

The rejected Claims do not include additional limitations that point to integration of the abstract idea into a practical application and are therefore directed to the abstract idea.
Claim 1 is a generic automation of a mental process of looking at a picture and narrating the story it may be telling. The Claim is stated at a very high level and does not include the details that are necessary for, and suited to, a machine:

1. A recommendation information presentation device comprising: a processor; and a memory connected to or built into the processor,

wherein the processor receives a story creation request from a user terminal via a network, [Jack receives a request from his editor, who is chatting with him via their mobile phones.]

analyzes an image held by a user to generate analysis information, upon reception of the story creation request, [Jack looks at the cartoons in a comic book that he is going to narrate and tries to make out a story from the depictions.]

inputs the analysis information to a machine learning model for story creation and [The machine learning model is a technological element that is stated at a high level and without nexus to the remainder of the Claim, and as such is not sufficient to amount to an integration into a practical application (2A, Prong Two) or to cause the Claim as a whole to amount to significantly more than the underlying abstract idea (2B).]

causes a story configured of a set of sentences describing a fictitious event based on the analysis information to be output from the machine learning model for story creation, [Jack writes down what he sees in the depictions to create a story for the comic book.]

generates recommendation information according to the story, and [Jack comes up with alternative dialogs for the characters in the depictions.]

distribute the story and the recommendation information to the user terminal via the network, the story and the recommendation information being displayed on a display of the user terminal. [Jack writes down his suggested dialogs for the editor of the comic book.]
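For illustration only, the level of generality the rejection attributes to Claim 1 can be expressed as a minimal sketch of the claimed flow. All names here (analyze_image, story_model, pick_recommendation, create_story_pipeline) are hypothetical stand-ins, not the Applicant's implementation; the point is that nothing in the claim language constrains what happens inside the model, which remains an opaque callable.

```python
def analyze_image(image: str) -> dict:
    # Stand-in for "analyzes an image ... to generate analysis information".
    return {"caption": f"a scene from {image}"}

def story_model(analysis: dict) -> str:
    # Opaque stand-in for the claimed "machine learning model for story
    # creation"; the claim recites no inner workings for this step.
    return f"Once upon a time, {analysis['caption']}."

def pick_recommendation(story: str) -> str:
    # Stand-in for "generates recommendation information according to the story".
    return f"Recommended because of: {story}"

def create_story_pipeline(image: str) -> tuple[str, str]:
    # The claimed sequence: analyze, generate a story, generate a recommendation.
    analysis = analyze_image(image)
    story = story_model(analysis)
    return story, pick_recommendation(story)
```

Any function bodies at all would satisfy this skeleton, which is the sense in which the model "sits in the Claim as a black box."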
Step 2B: Search for an Inventive Concept: Additional Elements Do Not Amount to Significantly More

The additional limitations of processors and memory are well-understood, routine, and conventional machine components that are being used for their well-understood, routine, conventional, and rather generic functions. The limitation of "machine learning model" is included at a high and general level; we need either information about the inner workings of the model or, at the least, any special preparation for the training of the model or its inputs; otherwise the model sits in the Claim as a black box that is disconnected from the workings of the Claim. All of these limitations are expressed generically and lack nexus to the Claim language, and as such are a separable and divisible mention of a machine. Accordingly, they are not sufficient to cause the Claim as a whole to amount to significantly more than the underlying abstract idea.

The Dependent Claims do not add limitations that could integrate the abstract idea into a practical technological application (2A, Prong Two) or could help the Claim as a whole to amount to significantly more (2B) than the abstract idea identified for the Independent Claim:

2. The recommendation information presentation device according to claim 1, wherein the processor generates, as the analysis information, at least one of content analysis information obtained by analyzing a content of the image, personality-preference analysis information obtained by analyzing a personality preference of the user, or processed personality-preference analysis information that is information obtained by processing the personality-preference analysis information and represents a personality preference different from the personality preference of the user. [Jack knows that the comic book readers prefer a certain set of traits for Superman, which is a personality depicted in the comic book.]

3. The recommendation information presentation device according to claim 2, wherein the processor generates the content analysis information from the image by using a machine learning model for content analysis. [This is telling a machine: this is the task, do it. The Claim needs the level of detail required by the machine/model to perform the task, and such detail needs to be in the Claim to give meaning to the machine component that is expressed as a general black box.]

4. The recommendation information presentation device according to claim 2, wherein the processor generates the personality-preference analysis information from the content analysis information by using a personality-preference conversion dictionary. [Jack refers to a book that includes descriptions of the different characters such as Superman and Batman.]

5. The recommendation information presentation device according to claim 1, wherein the processor selects the recommendation information according to the story from a plurality of pieces of the recommendation information registered in advance. [Jack has a booklet of his previous recommendations and selects from them.]

6. The recommendation information presentation device according to claim 1, wherein the processor inputs an auxiliary motif that assists in creating the story to the machine learning model for story creation, in addition to the analysis information. [Jack can access an "auxiliary motif" in generating his narration. What is an "auxiliary motif"? Please define it inside the Claim and include in the Claim how this factor assists in the narration. Published Application: "[0103] The auxiliary motif 125 is a word that assists in creating the story 74. The auxiliary motif 125 is a word input by the user 13 on the story creation instruction screen 105. Alternatively, the auxiliary motif 125 is prepared by the creation unit 65 selecting an appropriate word from the dictionary stored in the storage 40A…." and Figure 23 provide support for "auxiliary motif," and more of the supporting disclosure needs to be in the Claim.]

9. The recommendation information presentation device according to claim 1, wherein the processor generates, as the analysis information, at least one of personality-preference analysis information obtained by analyzing a personality preference of the user, or processed personality-preference analysis information that is information obtained by processing the personality-preference analysis information and represents a personality preference different from the personality preference of the user. [Jack knows the preference of the editor and takes it into consideration.]

10. The recommendation information presentation device according to claim 1, wherein the processor generates, as the analysis information, processed personality-preference analysis information that is information obtained by processing personality-preference analysis information obtained by analyzing a personality preference of the user, and [Jack knows the preference of the editor and takes it into consideration.] wherein the processed personality-preference analysis information is obtained by replacing a word representing the personality preference of the user included in the personality-preference analysis information with a word opposite to the word. [Jack, in order to generate alternatives, generates recommendations that would be contrary to the editor's liking in order to provide perspective and options.]

Independent Claim 7 and independent Claim 8 have limitations similar to the limitations of Claim 1 and do not include additional limitations that can (1) integrate the abstract idea into a practical application or (2) cause the Claim as a whole to amount to significantly more than the underlying abstract idea.
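The Claim 10 limitation discussed above (replacing a word representing the user's preference with an opposite word) is, as recited, a simple substitution. A minimal sketch under that reading; the antonym table is a hypothetical example, since the claim does not specify one:

```python
# Hypothetical antonym table; the claim recites only "a word opposite to the word".
OPPOSITES = {"social": "solitary", "outdoor lover": "indoor lover"}

def process_preference(preference_words: list[str]) -> list[str]:
    # Replace each registered preference word with its opposite;
    # words without a registered opposite pass through unchanged.
    return [OPPOSITES.get(word, word) for word in preference_words]
```

The example words "social" and "outdoor lover" are taken from the Specification excerpt quoted later for Claim 4 ([0063]); the opposites chosen here are illustrative only.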
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-9 are rejected under 35 U.S.C. 103 as being unpatentable over Kasina (U.S. 20180150444) in view of Kakinuma (U.S. 20190287154).

Regarding Claim 1, Kasina teaches:

1. A recommendation information presentation device comprising: a processor; and [Kasina, Figure 15 teaches the hardware, including "processing devices such as CPUs GPUs etc. 1504" and "storage resources such as RAM, ROM, Hard Disks, Flash Drives, Optical Disks, Magnetic media, etc. 1506."]

a memory connected to or built into the processor, [Kasina, Figure 15, "storage resources such as RAM, ROM, Hard Disks, Flash Drives, Optical Disks, Magnetic media, etc. 1506," connected by "bus 1524."]

wherein the processor receives a story creation request from a user terminal via a network, [Kasina, Figure 1: the user is shown at the "user computing device 114." Figure 15 shows the "Network Interfaces 1520" and the "Communication Conduit 1522," which teach the network of the Claim. Figure 15 also depicts types of user terminals/devices at the bottom of the Figure. "[0041] The end user may interact with the narrative creation engine 108 via a user computing device 114. The user computing device 114 may correspond to any of a stationary computing workstation, a laptop computing device, a game console, a smartphone or any other type of handheld computing device, a set-top box, a wearable computing device, etc. In one case, the user computing device 114 may correspond to a separate device with respect to the image capture devices 104. In another case, the user computing device 114 may incorporate one or more of the image capture devices 104." The receiving of the request is taught by the receiving of the images, Figure 14, 1406: "[0151] In block 1406, the narrative creation engine 108 receives a set of input images from an end user, via a user computing device…." This is because, in response to this input, the system moves on to generating the album narrative at 1412 and 1414.]

analyzes an image held by a user to generate analysis information, upon reception of the story creation request, [Kasina, Figure 1 shows the "image capture devices / ICD 104," which provide images to the "narrative creation engine 108"; the images are analyzed at the "Album Processing Component 120," which receives the "set of input images." The set of input images is shown in Figure 2, 202. Figure 4 shows features being extracted by the "primary feature extraction component 408" operating on the images / "source content items 406." Figure 7 shows the stages of analysis, and Figure 8 is a flowchart starting with the "feature extraction component 804," which analyzes the "input source image …." In Figure 14, input of the images at 1406 leads to the analysis of images at 1408 and 1410.]

inputs the analysis information to a machine learning model for story creation and [Kasina, Figure 4, "secondary attribute extraction components 412" receive the features extracted by the "primary feature extraction component 408" operating on the images / "source content items 406." "[0068] …For instance, the secondary attribute extraction component 412 can include one or more machine-learned statistical models which map the image features (provided by the primary attribute extraction component 408) into one or more classification results."]

causes a story configured of a set of sentences describing a fictitious event based on the analysis information to be output from the machine learning model for story creation, [Kasina, Figure 1, the "narrative creation component 126" and "annotated album creation component 128" generate a story based on the pictures, as shown in Figure 3. "[0004] A computer-implemented technique is described for automatically (or semi-automatically) generating a textual narrative based on a set of input images. In one scenario, the end user captures the set of input images while visiting one or more locations. The generated narrative describes the user's travel experience at those locations in a cohesive manner."]

generates recommendation information according to the story, and [Kasina: the annotations that are presented to the user in the annotated album 302 of Figure 3 teach the "recommendation information" of the Claim because they are "according to the story" that is generated.]

distribute the story and the recommendation information to the user terminal via the network, the story and the recommendation information being displayed on a display of the user terminal. [Kasina, Figure 1, the "annotated album" generated by the "annotated album creation component 128" is presented back to the user in the "user interface component 118."]

While the current broad language of the Claim is taught by Kasina, a reference is added that teaches the "recommendation information" of the Claim. Kasina teaches that the user preferences and expectations may be considered in the handling of the user data ([0183]) but does not mention using the user preferences in recommending a particular image to the user. (Kakinuma: "An image searching apparatus includes processing circuitry configured to calculate an evaluation value that indicates a preference of a customer, by using an image feature amount based on an image; and identify an image to be recommended to the customer by using the calculated evaluation value, from among images of a plurality of articles." Abstract.)

Kakinuma teaches:

generates recommendation information according to the story, and [Kakinuma, "An image searching apparatus includes processing circuitry configured to calculate an evaluation value that indicates a preference of a customer, by using an image feature amount based on an image; and identify an image to be recommended to the customer by using the calculated evaluation value, from among images of a plurality of articles." Abstract. See Figure 1: the "image searching apparatus 120" searches the "image information" in view of the "customer information" and outputs the "recommendation image information."]

distribute the story and the recommendation information to the user terminal via the network, the story and the recommendation information being displayed on a display of the user terminal. [Kakinuma, Figure 1, output of the "recommendation image information" to the user 180. Paragraphs [0039]-[0042] teach the types of factors that cause an image to be selected and recommended. The recommendation is presented in an advertisement to the user.]
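The Kakinuma mapping above (an evaluation value indicating a customer's preference, used to identify an image "having the same or similar evaluation value" to recommend) reduces to nearest-value selection. A minimal sketch under the assumption of scalar evaluation values per image; the function name and scoring are illustrative, not Kakinuma's actual implementation:

```python
def recommend(candidates: dict[str, float], customer_value: float) -> str:
    # Pick the candidate image whose stored evaluation value is closest to
    # the value calculated for the customer's preference (hypothetical
    # scalar simplification of per-index evaluation values).
    return min(candidates, key=lambda name: abs(candidates[name] - customer_value))
```

In Kakinuma the comparison runs per index over multiple indices; a single scalar is used here only to keep the selection step visible.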
Kasina and Kakinuma both pertain to image characterization and identification, and it would have been obvious to combine the recommendation function of Kakinuma in order to present images and narration/captions better suited to the preferences of the user. This combination falls under combining prior art elements according to known methods to yield predictable results, or use of a known technique to improve similar devices (methods, or products) in the same way. See MPEP 2141; KSR, 550 U.S. at 418, 82 USPQ2d at 1396.

Regarding Claim 2, Kasina teaches:

2. The recommendation information presentation device according to claim 1, wherein the processor generates, as the analysis information, at least one of content analysis information obtained by analyzing a content of the image, personality-preference analysis information obtained by analyzing a personality preference of the user, or processed personality-preference analysis information that is information obtained by processing the personality-preference analysis information and represents a personality preference different from the personality preference of the user. [Kasina, Figure 4, "primary attribute extraction component 408" analyzes the content of the "primary sources 404," which are images collected by the crawler. See also Figure 8 and the "feature extraction component 804," which extracts features from the content of an image.]

Regarding Claim 3, Kasina teaches:

3. The recommendation information presentation device according to claim 2, wherein the processor generates the content analysis information from the image by using a machine learning model for content analysis. [Kasina, Figure 4: "[0082] An optional model-generating component 428 generates one or more machine-learned statistical models. For instance, the model-generating component 428 can generate a single machine-learned statistical model that maps any set of attributes to a text phrase. The text phrase corresponds to an appropriate description of whatever image is associated with the input attributes. In one implementation, for instance, the model-generating component 428 can correspond to a recursive neural network (RNN), such as an encoder-decoder type of RNN. The encoder phase of the RNN maps an input set of input attributes to a vector in a semantic space, while the decoder phase of the RNN maps the vector to an output phrase…."]

Regarding Claim 4, Kasina does not teach registering user preferences or user personality traits. (Note the Specification of the instant Application in this regard: "[0063] As shown in FIG. 9 as an example, the second analysis unit 64 applies the content analysis information 72 to the personality-preference conversion dictionary 52 to cause the personality-preference analysis information 73 to be output. A plurality of words representing the personality preference of the user 13 such as 'social' and 'outdoor lover' are registered in the personality-preference conversion dictionary 52. The second analysis unit 64 calculates a degree of similarity between the caption of the content analysis information 72 and the plurality of words of the personality-preference conversion dictionary 52….")

Kakinuma teaches:

4. The recommendation information presentation device according to claim 2, wherein the processor generates the personality-preference analysis information from the content analysis information by using a personality-preference conversion dictionary. [Kakinuma teaches the use of a customer/user persona, which includes characteristics/personality of the customer, for determining the recommended image in an advertisement. The user/customer's preferences are also used throughout Kakinuma and are the main focus of this reference. "[0203] In the above embodiments, it has been described that similar indices are used in the case where the image searching apparatus 120 is applied to the purchasing system and in the case where the image searching apparatus 120 is applied to the advertisement data generation system. However, the indices to be used may be changed, for each system to which the image searching apparatus 120 is applied. For example, in the case of the advertisement data generation system, in addition to qualitative indices for advertisement images, indices such as advertisement type, industry type, appeal axis, goal, and target persona, etc., may be included. Note that the advertisement type refers to a brand advertisement, a campaign advertisement, and a product announcement, etc. Furthermore, the appeal axis refers to price appeal, needs appeal, and enlightenment appeal, etc. Furthermore, persona refers to age, gender, family composition, and income, etc."]

Rationale for the combination is as provided for Claim 1.

Regarding Claim 5, Kasina does not teach registering user preferences for recommending images. Kakinuma teaches:

5. The recommendation information presentation device according to claim 1, wherein the processor selects the recommendation information according to the story from a plurality of pieces of the recommendation information registered in advance. [Kakinuma has a database of the user's previous purchases and images of the items that the user has previously purchased. Figures 1, 9, 12: "evaluation image information, customer information" is an input. Figure 11, S1109. "[0114] In step S1112, the image identifying unit 904 refers to the analysis information 600 stored in the analysis information storage unit 160, and identifies an article image having the same or similar evaluation value as the calculated evaluation value for each index. Furthermore, the image identifying unit 904 outputs the identified article image as recommendation image information to an external device." Figure 13 shows the calculation of the similarity levels with the previously registered values. Figure 15, "Purchase history information storage unit 1513" and "image information storage unit 110." See Figure 17, tying together the purchased products' images and dates of purchase. Figure 18 shows the pieces of information going into the generation of the "recommendation image information S1810."]

Rationale for the combination is as provided for Claim 1.

Regarding Claim 6, Kasina teaches:

6. The recommendation information presentation device according to claim 1, wherein the processor inputs an auxiliary motif that assists in creating the story to the machine learning model for story creation, in addition to the analysis information. [Kasina, Figure 1: the "auxiliary motif" is taught by the extra information provided by the "knowledge acquisition component 106," which provides context to the image. This information is used by the "narrative creation component 126," as shown in Figure 1, and ends up in the "annotated album." "[0045] In a travel-related context, the knowledge acquisition component 106 mines information from one or more primary knowledge sources that provide image-annotated narratives regarding travel to various locations. For instance, the knowledge acquisition component 106 can mine information from an online travel blog provided by Lonely Planet, of Melbourne, Australia. The knowledge acquisition component 106 operates by identifying source images in such a knowledge source, identifying attributes associated with those source images, and then identifying textual passages in the knowledge source which pertain to the source images.
The knowledge acquisition component 106 stores information in the knowledgebase that links the identified attributes with the textual passages.”] Claim 7 is a method claim with limitations corresponding to the limitations of Claim 1 and is rejected under similar rationale. Claim 8 is a computer program product system claim with limitations corresponding to the limitations of method Claim 1 and is rejected under similar rationale. Regarding Claim 9, Kasina does not take into account user profile or preference except broadly as pertaining to security and privacy of his data: “[0183] In closing, the functionality described herein can employ various mechanisms to ensure that any user data is handled in a manner that conforms to applicable laws, social norms, and the expectations and preferences of individual users. For example, the functionality can allow a user to expressly opt in to (and then expressly opt out of) the provisions of the functionality. The functionality can also provide suitable security mechanisms to ensure the privacy of the user data (such as data-sanitizing mechanisms, encryption mechanisms, password-protection mechanisms, etc.).” Kakinuma teaches: 9. The recommendation information presentation device according to claim 1, wherein the processor generates, as the analysis information, at least one of personality-preference analysis information obtained by analyzing a personality preference of the user, or processed personality-preference analysis information that is information obtained by processing the personality-preference analysis information and represents a personality preference different from the personality preference of the user. [Kakinuma: determining the user/customer’s preferences is the main focus of this reference. The preferences are related to the personality or persona. “An image searching apparatus includes processing circuitry configured to calculate an evaluation value that indicates a preference of a customer …” Abstract. 
“[0033] A problem to be addressed by an embodiment of the present invention is to implement recommendations corresponding to the preference of a customer.” “[0203] … For example, in the case of the advertisement data generation system, in addition to qualitative indices for advertisement images, indices such as advertisement type, industry type, appeal axis, goal, and target persona, etc., may be included. … Furthermore, the appeal axis refers to price appeal, needs appeal, and enlightenment appeal, etc. Furthermore, persona refers to age, gender, family composition, and income, etc.”]

Rationale for combination as provided for Claim 1. Kakinuma was combined for teaching the recommendation feature, and recommendation according to user preference is a feature of recommendation.

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Kasina and Kakinuma and further in view of Levy (U.S. 20100268661).

Regarding Claim 10, Kasina does not teach the added feature. Kakinuma teaches:

10. The recommendation information presentation device according to claim 1, wherein the processor generates, as the analysis information, processed personality-preference analysis information that is information obtained by processing personality-preference analysis information obtained by analyzing a personality preference of the user, and

[Kakinuma: determining the user/customer’s preferences is the main focus of this reference. The preferences are related to the personality or persona. “An image searching apparatus includes processing circuitry configured to calculate an evaluation value that indicates a preference of a customer …” Abstract. See the mapping provided for Claims 4 and 9. There are 61 occurrences of “preference of the customer” or “customer’s preference” in Kakinuma.]
wherein the processed personality-preference analysis information is obtained by replacing a word representing the personality preference of the user included in the personality-preference analysis information with a word opposite to the word.

Rationale for combination as provided for Claim 1. Kakinuma was combined for teaching the recommendation feature, and recommendation according to user preference is a feature of recommendation. Kakinuma does not teach generating an opposite story. Levy teaches wherein the processed personality-preference analysis information is obtained by replacing a word representing the personality preference of the user included in the personality-preference analysis information with a word opposite to the word.

[Levy is also directed to “Recommendation Systems” and teaches both related and opposite recommendations: “[0049] Another embodiment of this invention includes a recommendation system that uses both positive (i.e. related) and negative (i.e. opposite) correlations as the basis for weights to create estimated ratings….” “[0439] Furthermore, the neighborhood is created using the largest correlation values in terms of absolute value, such that large negative and positive correlations are used. As such, neighborhood items are also called predictive items, and neighborhood users are also called predictive users, rather than similar items or similar users, since they may be related or opposite. In other words, knowing a rating of a user that is opposite of the target user's taste or a rating of an item that is opposite of the target item's preferred users are both useful in estimating the rating. By using the largest correlation in terms of absolute value, the strongest predictive items and/or users are utilized, not ignored, thus reducing error.
The results are accurate since local residual ratings are used, and the magnitude and sign of the weight is used to properly add or subtract the residual rating.” “[0569] In addition, the process can be done with related users, and users with the smallest similarity with users that acted upon the target item are used as the dislikes. Furthermore, this related user approach can be combined with the related items approach. Again, it's the opposite of the method to determine likely users.” See Figure 6.]

Kasina/Kakinuma and Levy pertain to recommendation systems, and it would have been obvious to combine the dislike/opposite/negative construction of Levy with the system of the combination for the reasons cited in Levy, namely to take into account the dislikes of the user as well as his likes and preferences. This combination falls under combining prior art elements according to known methods to yield predictable results, or use of a known technique to improve similar devices (methods, or products) in the same way. See MPEP 2141; KSR, 550 U.S. at 418, 82 USPQ2d at 1396.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Regarding the role of Personality, see also:

Han (U.S. 20120117473): “[0073] People Tags--A person defined in a user profile (and possibly stored within the contacts database) can include keyword tags that describe their personality or interests. If the person tags match any of the tags in the template, the template is assigned the people tag score. For example, if a person loves dolls, the user can add a "dolls" tag and a template with dolls will be suggested by the recommendation system.”

Shillingford (U.S.
20220014352): “[0034] A user can leverage current formats and techniques for creating meaningful, curated content about themselves, their values, their interests, likes, dislikes, motivations, and other things that make up their personalities. In some embodiments, the content is enhanced by systems and methods that support a user's attestations, including systems and methods that utilize augmented and virtual reality. For example, the content may include selected writings from a wide variety of sources (e.g., documents, email, tests, blog posts, tweets, etc.), photos and videos derived from one or more platforms (e.g., smartphone, camera, social networking platform, gaming systems, etc.), as well as other representative input collected via scanning, faxing, optical character recognition or any other means to transform physical artifacts into digital representations.”

Support for the “opposite” feature of this Claim is in Figure 22 of the instant Application and its description [0096]-[0101].

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to FARIBA SIRJANI, whose telephone number is (571) 270-1499. The examiner can normally be reached 9 to 5, M-F.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Pierre Desir, can be reached at 571-272-7799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Fariba Sirjani/
Primary Examiner, Art Unit 2659
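The negative-correlation technique the examiner cites from Levy (building a neighborhood from the largest correlations in absolute value, so that "opposite" items contribute through signed weights on mean-centered residual ratings) can be sketched in Python. This is a minimal, hypothetical illustration of the general idea only; it is not code from Levy, Kakinuma, or the instant Application, and the function and variable names are invented for illustration:

```python
import numpy as np

def estimate_rating(ratings, user, target_item, k=2):
    """Estimate a user's rating of target_item from the k items whose
    ratings correlate most strongly with it in ABSOLUTE value, so that
    negatively correlated ("opposite") items are used, not ignored.

    ratings: 2-D float array, rows = users, cols = items, NaN = unrated.
    """
    # Residual ratings: subtract each item's mean, ignoring unrated cells.
    means = np.nanmean(ratings, axis=0)
    resid = ratings - means

    # Correlate every other item with the target item over co-rated users.
    corr = np.zeros(ratings.shape[1])
    for j in range(ratings.shape[1]):
        if j == target_item:
            continue
        both = ~np.isnan(resid[:, j]) & ~np.isnan(resid[:, target_item])
        if both.sum() >= 2:
            corr[j] = np.corrcoef(resid[both, j], resid[both, target_item])[0, 1]

    # Neighborhood: largest |correlation| among items this user has rated.
    neighbors = [j for j in np.argsort(-np.abs(corr))[:k]
                 if not np.isnan(ratings[user, j])]

    # Signed weights: a negative correlation SUBTRACTS the residual, so
    # disliking an "opposite" item raises the estimate for the target.
    num = sum(corr[j] * resid[user, j] for j in neighbors)
    den = sum(abs(corr[j]) for j in neighbors)
    return means[target_item] + (num / den if den else 0.0)
```

A rating of an item that moves opposite to the target is still predictive here: the sign of the weight converts the "dislike" into useful signal, which is the point of Levy's paragraphs [0049] and [0439] quoted in the rejection.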

Prosecution Timeline

Jul 13, 2023
Application Filed
Jun 16, 2025
Non-Final Rejection — §101, §103
Oct 16, 2025
Response Filed
Dec 02, 2025
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603099
SELF-ADJUSTING ASSISTANT LLMS ENABLING ROBUST INTERACTION WITH BUSINESS LLMS
2y 5m to grant Granted Apr 14, 2026
Patent 12579482
Schema-Guided Response Generation
2y 5m to grant Granted Mar 17, 2026
Patent 12572737
GENERATIVE THOUGHT STARTERS
2y 5m to grant Granted Mar 10, 2026
Patent 12537013
AUDIO-VISUAL SPEECH RECOGNITION CONTROL FOR WEARABLE DEVICES
2y 5m to grant Granted Jan 27, 2026
Patent 12492008
Cockpit Voice Recorder Decoder
2y 5m to grant Granted Dec 09, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
76%
Grant Probability
99%
With Interview (+31.0%)
2y 10m
Median Time to Grant
Moderate
PTA Risk
Based on 547 resolved cases by this examiner. Grant probability derived from career allow rate.
