Prosecution Insights
Last updated: April 19, 2026
Application No. 18/688,619

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM

Status: Non-Final OA (§101, §103)

Filed: Mar 01, 2024
Examiner: GOOD JOHNSON, MOTILEWA
Art Unit: 2619
Tech Center: 2600 (Communications)
Assignee: Sony Group Corporation
OA Round: 1 (Non-Final)

Grant Probability: 73% (Favorable)
Expected OA Rounds: 1-2
Median Time to Grant: 3y 5m
Grant Probability With Interview: 87%

Examiner Intelligence

Career Allow Rate: 73% (608 granted / 831 resolved), above average (+11.2% vs TC avg)
Interview Lift: +14.1% (moderate), comparing resolved cases with vs. without an interview
Typical Timeline: 3y 5m avg prosecution; 35 applications currently pending
Career History: 866 total applications across all art units
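
As a quick sanity check on the arithmetic behind these headline figures, here is a minimal Python sketch; the variable names are illustrative, not the analytics tool's API:

```python
# Reproduce the dashboard's headline figures from the raw counts above.
# Variable names are illustrative; this is not the analytics tool's API.

granted = 608           # career grants
resolved = 831          # career resolved cases (grants + abandonments)
interview_lift = 0.141  # reported lift for resolved cases with an interview

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # 73.2%, displayed as 73%

# The "with interview" probability appears to be the base rate plus the lift.
print(f"With interview: {allow_rate + interview_lift:.1%}")  # 87.3%, displayed as 87%
```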

Statute-Specific Performance

§101: 8.9% (-31.1% vs TC avg)
§103: 48.8% (+8.8% vs TC avg)
§102: 24.4% (-15.6% vs TC avg)
§112: 11.0% (-29.0% vs TC avg)

Baseline: Tech Center average estimate. Based on career data from 831 resolved cases.
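
The four deltas are mutually consistent with a single Tech Center baseline of 40% (each rate minus its delta comes out to 40.0%), matching the one average line the original chart showed. A short sketch, with the 40% baseline inferred from the deltas rather than stated by the tool:

```python
# Recompute the "vs TC avg" deltas shown above. The 40% baseline is
# inferred (rate - delta = 40.0% for every statute), not stated by the tool.
TC_AVG_ESTIMATE = 0.40

examiner_rates = {"§101": 0.089, "§103": 0.488, "§102": 0.244, "§112": 0.110}

for statute, rate in examiner_rates.items():
    delta = rate - TC_AVG_ESTIMATE
    print(f"{statute}: {rate:.1%} ({delta:+.1%} vs TC avg)")
# Output matches the figures above, e.g. "§101: 8.9% (-31.1% vs TC avg)"
```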

Office Action

Grounds of rejection: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: "movement determination unit"; "target physical object determination unit"; "virtual object behavior update unit" in claims 1-18.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 20 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim(s) does/do not fall within at least one of the four categories of patent eligible subject matter because the claim recites "a program causing a computer to execute processing". A program fails to fall within one of the four statutory categories of a useful process, machine, manufacture, or composition of matter.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 1-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Kawamae et al., U.S. Patent Publication Number 2022/0044482 A1, in view of Jaafar et al., U.S. Patent Number 10,380,803 B1.

Regarding claim 1, Kawamae discloses an information processing apparatus comprising: a movement determination unit (movement detection processor, 83) configured to determine a movement of a physical object disposed in a real space (paragraph 0009, determines whether or not the real object is movable; paragraph 0010, recognizing that the object is a movable object by detecting that the real object has moved); a target physical object determination unit (CPU 85) configured to determine a new target physical object from among other physical objects (paragraph 0133, captured objects that have been grouped are set as targets to be associated; if there are a plurality of candidates, the target is determined according to priority; paragraph 0070, one of the two chairs has been removed; when the chair is removed, the stuffed animal object is arranged on the remaining chair); and a display control unit configured to display the virtual object in the real space with the new target physical object (FIG. 3C).

However, it is noted that Kawamae discloses a physical object, i.e. chair, determining the chair has been moved or been removed, and determining a new target physical object, i.e. another chair. Kawamae fails to specifically disclose a virtual object behavior update unit configured to determine behavior of the virtual object in relation to the new target physical object in a case where the new target physical object is determined; and display the virtual object in the real space with the determined behavior.

Jaafar discloses a case where the target physical object that is a target of behavior of a virtual object moves (col. 14, lines 10-11, system 100 may manipulate the virtual model to perform these behaviors in a suitable manner); a virtual object behavior update unit configured to determine behavior of the virtual object in relation to the new target physical object in a case where the new target physical object is determined (col. 14, lines 21-24, system may automatically manipulate the 3D model of the target object based on artificial intelligence associated with the 3D model and configured to cause the 3D model to automatically behave); and display the virtual object in the real space with the determined behavior (col. 2, lines 26-30, target object within the mixed reality presentation such that the target object appears to still be present within the real-world environment, but may be made to appear to behave in a different manner within the mixed reality presentation; col. 3, lines 11-15, it may appear to a user viewing the mixed reality presentation that the target object itself moves, over a period of time, from a first location coincident with the actual location of the target object to a different location in the real-world environment; col. 3, line 64 – col. 4, line 1, mixed reality presentation may identify the additional target object while the additional target object is included within a field of view of the animated 3D modeling of the character (e.g., to simulate the character encountering, noticing, and/or recognizing the additional target object)).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include, in the moved physical objects with virtual objects positioned as disclosed by Kawamae, the behavior of the virtual object as disclosed by Jaafar, to provide virtual objects or target objects that appear to behave in a manner such as presentation of behaviors in a real-world environment, to simulate the encountering, noticing, and/or recognizing of an additional target object.

Regarding claim 2, Kawamae discloses wherein the target physical object determination unit retrieves a candidate to be the target physical object from among a plurality of the physical objects and determines the new target physical object from among the retrieved candidates (paragraph 0113, an AR object selection candidate may be prepared in advance; paragraph 0133, other captured objects that have been grouped are set as targets to be associated; if there are a plurality of candidates, the target is determined according to the priority).

Regarding claim 3, Kawamae discloses wherein the target physical object determination unit performs filtering processing of excluding the candidate that is not capable of being set as the new target physical object (paragraph 0131, it is determined whether or not a captured object to be associated with the selected AR object is present in the current shooting space; if the captured object is present (Yes), the process proceeds to S165; if the captured object is not present (No), the process proceeds to S163).

Regarding claim 4, Kawamae discloses wherein the target physical object determination unit calculates a priority for the candidate and determines the new target physical object on a basis of the calculated priority (paragraph 0133, other captured objects that have been grouped are set as targets to be associated; if there are a plurality of candidates, the target is determined according to the priority).

Regarding claim 5, Kawamae discloses wherein an affordance is set for the physical object, and the target physical object determination unit sets, as the candidate, the physical object that matches affordance information set for the target physical object that has moved (paragraph 0090, a plurality of related captured objects to be associated with a common AR object are grouped and registered in the captured object group data; paragraph 0125, a plurality of captured objects having the same form are registered in the captured object group data as a captured object group; for example, the two chairs 18 and 19 in FIG. 3A are grouped; see also figures 18A-18C). Jaafar discloses col. 9, lines 3-4, target objects may include predesignated objects whose characteristics are stored in a database; col. 9, lines 58-60, certain identifiable characteristic (e.g., a piece of furniture with a back and four legs may be recognized as a chair).

Regarding claim 6, Kawamae discloses wherein the target physical object determination unit sets, as the candidate, the physical object satisfying a condition used for determining the target physical object that has moved (paragraph 0134, movement flag of the target captured object is referred to, and it is determined whether or not the movement flag of the target captured object is "1" (=movable)).

Regarding claim 7, Kawamae discloses wherein the target physical object determination unit sets, as the candidate, the physical object to which the target physical object that has moved is substantially similar in size or height (paragraph 0124, a plurality of captured objects having the same form are registered in the captured object group data as a captured object group; figure 3C; chair moved or removed).

Regarding claim 8, Kawamae discloses wherein the target physical object determination unit excludes the candidate of which a distance to the information processing apparatus is larger than a predetermined distance threshold (paragraph 0064, an offset distance is given to a specific feature point of the object for positioning; paragraph 0088, a region including a feature point, for which distance data given to a feature point is the farthest point, and excluding a captured object is registered in the background object data).

Regarding claim 9, Kawamae discloses wherein the target physical object determination unit excludes the candidate that is not capable of being visually recognized by a user (paragraph 0069, when the chair is moved out of the range of the image captured by the camera, the stuffed animal object associated with the chair is also excluded from the display image).

Regarding claim 10, Kawamae discloses wherein the target physical object determination unit excludes the candidate not satisfying a condition used for determining the target physical object that has moved (paragraph 0126, a plurality of states in which one captured object is deformed are registered as a captured object group; for example, the window 16 in FIG. 3A can be deformed in a closed state, an open state, a state in which the curtain is drawn; paragraph 003, it is determined whether or not the position and posture relationship between the virtual window and the viewpoint satisfies the specified conditions).

Regarding claim 11, it is noted that Kawamae fails to disclose wherein the target physical object determination unit excludes the candidate for which behavior of another virtual object is set. Jaafar discloses wherein the target physical object determination unit excludes the candidate for which behavior of another virtual object is set (col. 18, lines 7-10, it may be desirable, in certain implementations, for system to be selective with respect to target objects; col. 18, lines 20-22, may virtualize objects that the animated character appears to "see"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include, in the target physical object as disclosed by Kawamae, excluding candidates for which behavior such as that which is seen by an animated character is set, as disclosed by Jaafar, to be selective with respect to target objects and to analyze and/or virtualize only a certain number of potential targets at a time.

Regarding claim 12, Kawamae discloses wherein the target physical object determination unit sets the priority higher as a distance between a position of the virtual object and a position of the candidate is shorter (paragraph 0133, if there are a plurality of candidates, the target is determined according to the priority; paragraph 0162, 3D display reflecting the distance relationship between the AR object and the real object (that is, the front-and-back relationship viewed from the user) is performed, so it is possible to perform display with a sense of depth).

Regarding claim 13, Kawamae discloses wherein the target physical object determination unit sets the priority higher as a target physical object matches a condition used for determining the target physical object that has moved (paragraph 0124, a plurality of captured objects having the same form are registered; paragraph 0151, when the chair 18 and the chair 19 are grouped and registered, even if the first-priority chair 18 is removed, the stuffed animal object 23 can be arranged on the second-priority chair 19 with reference to the captured object group data).

Regarding claim 14, Kawamae discloses wherein the target physical object determination unit sets the priority on a basis of continuity of user experience (paragraph 0147, movement history data; the position and shooting direction of the AR display device at each time are indicated by difference data with the reference point (PBase) as a starting point; movement history (MP1) at a time T1 is indicated; paragraph 0162, by grouping a plurality of captured objects, it is possible to display the AR object flexibly according to the situation in the real space, reflecting the intention of the user).

Regarding claim 15, it is noted that Kawamae discloses captured object profiles in figures 18A-18C, but fails to disclose wherein the virtual object behavior update unit determines the behavior of the virtual object by a different method depending on whether or not the virtual object is a moving object. Jaafar further discloses wherein the virtual object behavior update unit determines the behavior of the virtual object by a different method depending on whether or not the virtual object is a moving object (col. 11, lines 29-35, first location coincident with a first location of teddy bear object within real-world environment (e.g., seated on the table) to a second location distinct from the first location (e.g., to a different part of the table, to the floor under the table, etc.); teddy bear object may be virtualized so as to appear to come to life, stand up from where the teddy bear object is actually located, and move around the room freely). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include, in the movement profile as disclosed by Kawamae, the paths to other virtual objects as disclosed by Jaafar, to keep up with the ongoing target object analysis operation in implementations attempting to virtualize as many target objects as possible.

Regarding claim 16, it is noted that Kawamae discloses, in FIG. 6, a "movement profile" indicating the movement. However, Kawamae fails to disclose wherein in a case where the virtual object is the moving object, the virtual object behavior update unit calculates a movement path from a current position to a position corresponding to the new target physical object, and the display control unit move-displays the virtual object according to the calculated movement path. Jaafar discloses wherein in a case where the virtual object is the moving object, the virtual object behavior update unit calculates a movement path from a current position to a position corresponding to the new target physical object, and the display control unit move-displays the virtual object according to the calculated movement path (col. 18, line 62 – col. 19, line 1, an additional target object (i.e., soda can object 404-5) is presented within a field of view of animated character associated with virtual object; animated character implementing virtual object (i.e., the teddy bear that has appeared to come to life and now is shown to be walking around the real-world environment); col. 19, lines 31-39, virtual object 504 may approach virtual object 904, and appear to drink the soda from virtual object 904; see also figure 8). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include, in the movement profile as disclosed by Kawamae, the paths to other virtual objects as disclosed by Jaafar, to keep up with the ongoing target object analysis operation in implementations attempting to virtualize as many target objects as possible.

Regarding claim 17, wherein in a case where the virtual object is not the moving object, the virtual object behavior update unit determines a movement pattern and display pattern from a current position to a position corresponding to the new target physical object, and the display control unit displays the virtual object according to the determined movement pattern and display pattern.

Regarding claim 18, Jaafar discloses wherein in a case where the virtual object is not the moving object, the virtual object behavior update unit determines the movement pattern for moving the virtual object in accordance with another virtual object (col. 11, lines 29-35, teddy bear object may be virtualized so as to appear to come to life, stand up from where the teddy bear object is actually located, and move around the room freely).

Regarding claim 19, it is rejected based upon similar rationale as above claim 1. Kawamae further discloses an information processing method (paragraph 0010).

Regarding claim 20, it is rejected based upon similar rationale as above claim 1. Kawamae further discloses a program causing a computer to execute processing (paragraph 0055).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Anderson et al., U.S. Patent Publication Number 2018/0005435. Anderson discloses paragraph 0029, computing device may capture motion data from the physical object; paragraph 0030, determines virtual object position and/or behaviors based on the detected positions of the physical objects; paragraph 0031, the behavior of the virtual objects may include modification to one or more attributes of the associated virtual object; and paragraph 0033, the computing device renders a representation of the virtual camera scene using the virtual object.

Burns et al., U.S. Patent Number 11,348,316 B2. Burns discloses col. 15, lines 1-2, updates the appearance, function, or interactivity of the virtual element; col. 15, lines 39-42, a user may change the location of a physical object in the real-world environment, which may in turn alter the relative location of a virtual element depicted in the CGR environment with respect to the physical object; col. 16, lines 30-32, selects or changes the modality of the virtual element based on the attribute of the one or more other virtual elements.

Joo, U.S. Patent Number 10,521,941 B2. Joo discloses col. 7, lines 6-17, determines the operation of the virtual image based on the position at which the virtual image is to be displayed and the type of the object. The device 1000 may determine where the virtual image is to be displayed around the object, and may determine how to make the virtual image move according to the type of the object. For example, when the virtual image is a dog image and the dog image is determined to be displayed on a sofa, the device 1000 may confirm that the object is the sofa and that the dog image is to be displayed on the sofa. Also, the device 1000 may determine to display an image of a dog lying down on the sofa; col. 8, lines 3-6, virtual image may include any type of image that is displayed around an object and is movable in a particular operation in a predetermined situation.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Motilewa Good-Johnson, whose telephone number is (571) 272-7658. The examiner can normally be reached Monday - Friday, 6am-2:30pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jason Chan, can be reached at 571-272-3022. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MOTILEWA GOOD-JOHNSON/
Primary Examiner, Art Unit 2619
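
For orientation only, the claim-1 flow as the rejection characterizes it (movement detection, candidate filtering, priority-based retargeting, then a behavior update for display) can be sketched in Python. This is a hypothetical illustration of the claimed pipeline, not code from the application or from Kawamae or Jaafar; every class, function, and threshold below is invented for the example:

```python
from dataclasses import dataclass

# Hypothetical sketch of the claim-1 flow as characterized in the rejection:
# movement detection -> candidate filtering -> priority-based retargeting
# -> behavior update. Nothing here comes from the application or the cited
# references; all names and thresholds are invented for illustration.

@dataclass
class PhysicalObject:
    name: str
    position: tuple[float, float, float]
    visible: bool                  # visible to the user
    distance_to_device: float     # distance to the information processing apparatus
    affordances: frozenset[str]   # e.g. {"sittable"}

def _dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def determine_new_target(moved_target, candidates, max_distance=5.0):
    """Filter candidates (claims 3, 5, 8, 9), then rank by priority (claims 4, 12)."""
    eligible = [
        c for c in candidates
        if c.visible                                    # exclude non-visible (claim 9)
        and c.distance_to_device <= max_distance        # exclude too-distant (claim 8)
        and moved_target.affordances <= c.affordances   # affordance match (claim 5)
    ]
    if not eligible:
        return None
    # Priority rises as the candidate gets closer to the virtual object
    # (claim 12); the moved target's last position stands in for it here.
    return min(eligible, key=lambda c: _dist(c.position, moved_target.position))

def update_behavior(virtual_object_is_moving: bool, new_target) -> str:
    """Different handling for moving vs. static virtual objects (claims 15-17)."""
    if virtual_object_is_moving:
        return f"walk along a computed path to {new_target.name}"      # claim 16
    return f"re-display with a movement/display pattern at {new_target.name}"  # claim 17
```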

Prosecution Timeline

Mar 01, 2024: Application Filed
Oct 28, 2025: Non-Final Rejection under §101 and §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602107
SYSTEM AND METHOD FOR DETERMINING USER INTERACTIONS WITH VISUAL CONTENT PRESENTED IN A MIXED REALITY ENVIRONMENT
2y 5m to grant; granted Apr 14, 2026
Patent 12602884
DISPLAY SYSTEM AND DISPLAY METHOD FOR AUGMENTED REALITY
2y 5m to grant; granted Apr 14, 2026
Patent 12597218
EXTENDED REALITY (XR) MODELING OF NETWORK USER DEVICES VIA PEER DEVICES
2y 5m to grant; granted Apr 07, 2026
Patent 12592047
Method and Apparatus for Interaction in Three-Dimensional Space, Storage Medium, and Electronic Apparatus
2y 5m to grant; granted Mar 31, 2026
Patent 12573100
USER-DEFINED CONTEXTUAL SPACES
2y 5m to grant; granted Mar 10, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 73%
With Interview: 87% (+14.1%)
Median Time to Grant: 3y 5m
PTA Risk: Low

Based on 831 resolved cases by this examiner. Grant probability is derived from the career allow rate.
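
As a rough back-of-the-envelope, applying the 3y 5m median to the Mar 01, 2024 filing date puts a hypothetical grant around August 2027. A stdlib-only sketch; the add_months helper is ours, and the median of course ignores case-specific factors such as an interview or an RCE:

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Add whole months to a date, clamping the day (illustrative helper)."""
    y, m = divmod(d.year * 12 + (d.month - 1) + months, 12)
    return d.replace(year=y, month=m + 1, day=min(d.day, 28))

filed = date(2024, 3, 1)
projected = add_months(filed, 3 * 12 + 5)  # median 3y 5m from filing
print(projected)  # 2027-08-01: rough projected grant date
```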
