Prosecution Insights
Last updated: April 19, 2026
Application No. 18/393,809

IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND COMPUTER-READABLE STORAGE MEDIUM

Status: Final Rejection — §103
Filed: Dec 22, 2023
Examiner: GOOD JOHNSON, MOTILEWA
Art Unit: 2619
Tech Center: 2600 — Communications
Assignee: JVCKenwood Corporation
OA Round: 2 (Final)
Grant Probability: 73% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 5m
With Interview: 87%

Examiner Intelligence

Career Allow Rate: 73% (608 granted / 831 resolved), +11.2% vs TC avg — above average
Interview Lift: +14.1% among resolved cases with interview (moderate)
Typical Timeline: 3y 5m average prosecution; 35 applications currently pending
Career History: 866 total applications across all art units
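The headline figures above reduce to simple arithmetic. A minimal sketch, assuming the dashboard computes the allow rate as granted over resolved cases and applies the reported interview lift as an additive percentage-point adjustment (variable names are illustrative, not from the source):

```python
# Illustrative reconstruction of the examiner-intelligence figures.
# Assumption: allow rate = granted / resolved, and the interview lift
# is an additive percentage-point adjustment reported by the dashboard.
granted = 608
resolved = 831

career_allow_rate = granted / resolved               # ~0.732, shown as 73%
interview_lift = 0.141                               # +14.1 points (reported)
with_interview = career_allow_rate + interview_lift  # ~0.873, shown as 87%

print(f"{career_allow_rate:.1%} -> {with_interview:.1%}")  # prints 73.2% -> 87.3%
```

Under this reading, the 87% "With Interview" figure is simply the career rate plus the observed lift, not an independently modeled probability.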

Statute-Specific Performance

§101: 8.9% (-31.1% vs TC avg)
§103: 48.8% (+8.8% vs TC avg)
§102: 24.4% (-15.6% vs TC avg)
§112: 11.0% (-29.0% vs TC avg)
Tech Center averages are estimates; based on career data from 831 resolved cases.
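Every "vs TC avg" delta above is consistent with a single Tech Center average of 40.0% across all four statutes. A small sketch of how the deltas would be derived under that assumption (the 0.40 figure is inferred from the listed numbers, not stated by the source):

```python
# Inferred: each listed delta matches a Tech Center average of 40.0%.
tc_avg = 0.40
examiner_rates = {"101": 0.089, "103": 0.488, "102": 0.244, "112": 0.110}

# Delta in percentage points: examiner's per-statute rate minus TC average.
deltas = {s: round((r - tc_avg) * 100, 1) for s, r in examiner_rates.items()}
print(deltas)  # {'101': -31.1, '103': 8.8, '102': -15.6, '112': -29.0}
```

The takeaway matches the chart: this examiner's §103 outcomes run above the Tech Center average, while §101, §102, and §112 outcomes run well below it.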

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: "A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-5 are rejected under 35 U.S.C. 103 as being unpatentable over Yamasaki et al., U.S. Patent Publication Number 2017/0309051 A1, in view of Schimke, U.S. Patent Publication Number 2015/0279117 A1, further in view of Sanger et al., U.S. Patent Publication Number 2020/0117269 A1.

Regarding claim 1, Yamasaki discloses an image processing device comprising: an object information acquisition unit configured to acquire information on a first object (paragraph 0038, object feature point analyzer acquires the positions of these feature points as results of the analysis); an object detection unit configured to detect from image data the first object and a second object associated with the first object (paragraph 0035, object detection processing to detect a particular object from the image; see figure 2, physical person and cup detected); and a display mode change unit configured to change a display mode of the second object when the first object is detected and the second object is not a predetermined person (paragraph 0041, flag name indicating the state of the face is supplied thereto from the face interpretation unit, and the flag name indicating the state of the object is supplied thereto from the object interpretation unit; figure 6, display effect name), and not change the display mode of the second object when the first object is detected and the second object is a predetermined person (figures 2 and 6, depth relation between face and object, which Examiner interprets as a second object and first object of a predetermined person). However, Yamasaki fails to specifically disclose the first object that evokes a specific emotional response from a user; that the second object is not a face of a predetermined person who has a specific relationship with the user; and that the second object is the face of the predetermined person who has the specific relationship with the user.
Schimke discloses an emotion configuration (476); that the second object is not a face of a predetermined person who has a specific relationship with the user (figure 4; profile, other); and that the second object is the face of the predetermined person who has the specific relationship with the user (figure 4; profile, friends, close friends, family, selected friends, gaming community, clients, contacts; paragraph 0032, database includes one or more appearance profiles, which include information specifying appearance preferences and conditions for applying to objects; altering the appearance of a person or objects; relationship conditions between the user-wearer and the person or object whose appearance is altered, and/or other conditions for determining which altered appearance should be applied). Schimke further discloses not changing the display mode of the second object when the first object is detected and the second object is the face of the predetermined person who has the specific relationship with the user (paragraph 0034, specifies that a person should always be presented without any augmented appearance when the user-wearer is mom). However, Yamasaki discloses a detected first and second object, including a person, and Schimke discloses an emotion profile, but both fail to disclose the first object that evokes a specific emotional response from a user. Sanger discloses an augmented reality system that allows an end user to visualize a three-dimensional environment consisting of both real objects and virtual objects (paragraph 0110). Sanger further discloses determining if an object evokes a specific emotional state (paragraph 0114, determine that the end user is in the specific emotional state(s)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include in the augmented reality detection of two or more targets that include a face as disclosed by Yamasaki, in addition to detected objects and faces for effects, the relationship and emotion configuration as disclosed by Schimke, such that, as disclosed by Yamasaki, the relationship between the face and the object interpreted, and a meaning given to that relationship, can be customized and altered for relationships with a second person. It further would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include in the profile database as disclosed by Schimke the emotional states as disclosed by Sanger, to take into account in VR/AR systems the user's subconscious motivations for engaging the VR/AR system, so as to tailor it to a particular user.

Regarding claim 2, Yamasaki discloses wherein the object information acquisition unit is configured to acquire display mode change information that is a piece of data in which the first object, the second object, and a changed display mode of the second object are associated with one another, and the display mode change unit is configured to change the display mode of the second object based on the display mode change information when the first object is detected (figure 6; paragraph 0043, generates the virtual objects for masking the face and the object and a virtual effect corresponding to the relationship between the face and the object in accordance with the flag name for interaction; creates an image by adding the generated virtual objects and virtual effect to the image input; creates the display image in which the virtual objects individually associated with the face and the object are superimposed thereon; paragraph 0061, a meaning flag for the face, a meaning flag for the object (the flag name indicating the state of the object), the positional relation between the face and the object, and a display effect name are associated with one another to be registered to the interaction model data).

Regarding claim 3, Yamasaki discloses wherein the display mode change unit is configured to change the display mode of the second object to a display mode in which character information is erased in the second object (figures 2, 7, 11 and 12; see also paragraph 0043, display image creation unit creates the display image in which the virtual objects individually associated with the face and the object are superimposed thereon such that the face and the object are covered therewith).

Regarding claim 4, it is rejected based upon similar rationale as claim 1 above. Yamasaki further discloses a position information processing unit configured to determine whether a position of a user is within the specific location based on position information of the user (paragraph 0042, the face position information is supplied to the map creation unit from the face detector; see also figure 3). Schimke discloses acquiring information on a specific location where a user is expected to use the image processing device, and the first object associated with characteristics of the specific location (paragraph 0035, can specify that the family's house appears to move and have fangs on Halloween when viewed by the children and their friends; appearances and condition-settings are then stored in one or more appearance profiles).

Regarding claim 5, it is rejected based upon similar rationale as claim 1 above. Yamasaki further discloses an image processing method (paragraph 0011).

Response to Arguments

Applicant's arguments, see pages 7-10, filed 10/10/2025, with respect to the rejection of claims 1-5 under § 102 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of Yamasaki in view of Schimke, further in view of Sanger.
Applicant argues the prior art cited fails to disclose the first object that evokes a specific emotional response from a user and a second object associated with the first object; and a display mode change unit configured to change a display mode of the second object when the first object is detected and the second object is not a face of a predetermined person who has a specific relationship with the user, and not change the display mode of the second object when the first object is detected and the second object is the face of the predetermined person who has the specific relationship with the user, as recited in amended claim 1. Examiner responds that the prior art cited in combination (Yamasaki in view of Schimke, further in view of Sanger) discloses a database configured to apply an effect to a user based on a specific relationship with the user, and not change the display mode when the first object is detected and the second object is the face of the predetermined person who has the specific relationship with the user (Schimke, paragraph 0034, specifies that a person should always be presented without any augmented appearance when the user-wearer is mom).

Applicant argues the prior art cited (Yamasaki) fails to disclose acquiring information on a specific location where a user is expected to use the image processing device, and detecting from image data the first object associated with characteristics of the specific location. Examiner responds that Schimke discloses, in paragraph 0035, that one can specify that the family's house appears to move and have fangs on Halloween when viewed by the children and their friends; appearances and condition-settings are then stored in one or more appearance profiles.

Applicant argues the prior art cited fails to disclose the first object evokes a specific emotional response. Examiner responds that Sanger discloses a VR/AR system that tailors to a user's specific emotional response.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Baughman et al., U.S. Patent Number 9,891,884 B1: Baughman discloses determining the user's response to detecting a sound, viewing a particular object, or a combination of both (col. 3, lines 30-32); having the ability to change or modify the context provides for an opportunity to change a user's experience (col. 3, lines 2-3); matching of an object into a category of objects by machine logic based rules (col. 4, line 66 - col. 5, line 14); determining a user's response to a given stimulus by associating facial expressions to the relevant emotion (col. 8, lines 44-46); and determining if there is a known S/R association for the stimulus and whether there is a second S/R association of a S/R pair (col. 12, lines 65-67).

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Motilewa Good-Johnson, whose telephone number is (571) 272-7658. The examiner can normally be reached Monday - Friday, 6am-2:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jason Chan, can be reached at 571-272-3022. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MOTILEWA GOOD-JOHNSON/
Primary Examiner, Art Unit 2619

Prosecution Timeline

Dec 22, 2023: Application Filed
Jul 11, 2025: Non-Final Rejection — §103
Oct 10, 2025: Response Filed
Jan 09, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602107: SYSTEM AND METHOD FOR DETERMINING USER INTERACTIONS WITH VISUAL CONTENT PRESENTED IN A MIXED REALITY ENVIRONMENT (granted Apr 14, 2026; 2y 5m to grant)
Patent 12602884: DISPLAY SYSTEM AND DISPLAY METHOD FOR AUGMENTED REALITY (granted Apr 14, 2026; 2y 5m to grant)
Patent 12597218: EXTENDED REALITY (XR) MODELING OF NETWORK USER DEVICES VIA PEER DEVICES (granted Apr 07, 2026; 2y 5m to grant)
Patent 12592047: Method and Apparatus for Interaction in Three-Dimensional Space, Storage Medium, and Electronic Apparatus (granted Mar 31, 2026; 2y 5m to grant)
Patent 12573100: USER-DEFINED CONTEXTUAL SPACES (granted Mar 10, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 73%
With Interview: 87% (+14.1%)
Median Time to Grant: 3y 5m
PTA Risk: Moderate

Based on 831 resolved cases by this examiner. Grant probability derived from career allow rate.
