Prosecution Insights
Last updated: April 19, 2026
Application No. 18/593,604

Devices, Methods, and Graphical User Interfaces for Displaying Objects in 3D Contexts

Non-Final OA: §101, §103
Filed
Mar 01, 2024
Examiner
SHIH, HAOSHIAN
Art Unit
2179
Tech Center
2100 — Computer Architecture & Software
Assignee
Apple Inc.
OA Round
1 (Non-Final)
Grant Probability: 69% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 5m
With Interview: 90%

Examiner Intelligence

Career Allow Rate: 69% (above average; 375 granted / 545 resolved; +13.8% vs TC avg)
Interview Lift: +21.0% (strong), measured on resolved cases with an interview
Avg Prosecution: 3y 5m typical timeline; 20 cases currently pending
Career History: 565 total applications across all art units
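The headline figures above appear to follow from simple arithmetic on the stated counts. A minimal sketch, assuming the allow rate is granted/resolved and the with-interview figure is a flat additive lift (neither formula is stated on the page):

```python
# Hypothetical reconstruction of the headline examiner stats.
# Assumed formulas (not stated on the page):
#   allow rate      = granted / resolved
#   with interview  = allow rate + stated interview lift

granted = 375          # "375 granted / 545 resolved"
resolved = 545
interview_lift = 0.21  # "+21.0% Interview Lift"

allow_rate = granted / resolved
with_interview = allow_rate + interview_lift

print(f"Career allow rate: {allow_rate:.1%}")      # 68.8%, shown as 69%
print(f"With interview:    {with_interview:.1%}")  # 89.8%, shown as 90%
```

The rounded values match the dashboard's 69% and 90%, so a simple additive lift is at least consistent with the displayed numbers.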

Statute-Specific Performance

§101: 5.5% (-34.5% vs TC avg)
§103: 53.1% (+13.1% vs TC avg)
§102: 17.7% (-22.3% vs TC avg)
§112: 15.5% (-24.5% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 545 resolved cases
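One detail worth noticing in the figures above: every "vs TC avg" delta implies the same 40% baseline, which suggests the reference line is a single flat Tech Center estimate rather than per-statute averages. A quick check (the 40% baseline is an inference from the numbers, not something the page states):

```python
# Back out the implied Tech Center baseline from each statute's
# rate and its "vs TC avg" delta: implied baseline = rate - delta.

stats = {
    "§101": (5.5, -34.5),
    "§103": (53.1, +13.1),
    "§102": (17.7, -22.3),
    "§112": (15.5, -24.5),
}

for statute, (rate, delta) in stats.items():
    implied = rate - delta
    print(f"{statute}: {rate}% examiner vs {implied:.1f}% implied TC avg")
# Every implied baseline works out to 40.0%.
```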

Office Action

Rejections: §101, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Claims 2-21 are pending in this application and have been examined in response to the application filed on 03/01/2024.

Priority: This application is a CON of 17/492,425 (filed 10/01/2021, now PAT 11922584). 17/492,425 is a CON of 16/581,679 (filed 09/24/2019, now PAT 11138798). 16/581,679 claims the benefit of PRO 62/855,973 (filed 06/01/2019) and PRO 62/844,010 (filed 05/06/2019).

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 21 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. Claim 21 recites "a computer readable storage medium". Further, Applicant's specification fails to explicitly define the scope of "a computer readable storage medium". Thus, giving the term its plain meaning (see MPEP 2111.01), the claimed "computer readable storage medium" is considered to include data signals per se. Data signals per se are not statutory, as they fail to fall into one of the four statutory categories of invention. As an additional note, a non-transitory computer readable storage medium having executable programming instructions stored thereon is considered statutory, as non-transitory computer readable media exclude transitory data signals.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2-21 are rejected under 35 U.S.C. 103 as being unpatentable over Shaviv et al. (US 10,347,049 B2) in view of Brahmanapalli (US 2017/0201685 A1).
As to INDEPENDENT claim 2, Shaviv discloses a method, comprising: at an electronic device including a display generation component, one or more input devices, and one or more cameras: displaying, via the display generation component (col.2, l.1-8, col.3, l.11; a tablet computer with a built-in camera and touch input is disclosed), a first user interface for a … application that includes representations of a plurality of [images], including a first representation of a first [image] of the plurality of [images] (fig.5A; col.7, l.60-65; a plurality of item images are displayed); while displaying the first user interface for the … application, detecting, via the one or more input devices, a first sequence of one or more inputs including a first input corresponding to selection of the first representation of the first [image]; in response to detecting the first sequence of one or more inputs including the first input corresponding to selection of the first representation of the first [image]: ceasing display of the first user interface for the … application (fig.5A, fig.5B, col.7, l.46-col.8, l.1-15; the user selects a first item image in the first interface to be displayed in the second interface); and displaying, via the display generation component, a second representation of the first image in a second user interface that displays an augmented reality environment that includes content of at least a portion of a field of view of the one or more cameras (fig.5B; col.5, l.25-32; the selected item image is overlaid on the live video feed).

Shaviv does not expressly disclose that the application is a photos application and the images are photos. In the same field of endeavor, Brahmanapalli discloses a photos application in which the images are photos (fig.3A, [0007]; the user can select photos to place on a wall virtually).
It would have been obvious to one of ordinary skill in the art, having the teachings of Shaviv and Brahmanapalli before them prior to the effective filing date, to modify the interactive item placement simulation taught by Shaviv to include the personalized photo placement preview interface taught by Brahmanapalli, with the motivation of aiding the user in arranging and placing photos before developing said photos.

As to claim 3, the prior art as combined discloses wherein the second representation of the first photo in the second user interface replaces, in the augmented reality environment, at least a portion of a wall that is within the field of view of the one or more cameras (Brahmanapalli, fig.3A; the selected photo is displayed virtually on a real wall).

As to claim 4, the prior art as combined discloses while displaying the second representation of the first photo in the second user interface that displays the augmented reality environment, displaying, via the display generation component, one or more representations of objects within a physical environment corresponding to the field of view of the one or more cameras in front of the second representation of the first photo (Shaviv, fig.5F; Brahmanapalli, fig.3A; one object image is placed in front of another object image).

As to claim 5, the prior art as combined discloses wherein the displayed second representation of the first photo overlays a respective plane in the field of view of the one or more cameras (Brahmanapalli, fig.3A; the selected photo is on a wall in an AR environment).
As to claim 6, the prior art as combined discloses while displaying the second representation of the first photo in the second user interface, detecting, via the one or more input devices, a second input, corresponding to rotating the second representation of the first photo; and in response to detecting the second input, rotating the second representation of the first photo about an axis that is normal to the respective plane in the field of view of the one or more cameras over which the second representation of the first photo is displayed (Shaviv, col.2, l.15-22, col.8, l.42-44; Brahmanapalli, fig.3A; the user can rotate the item image to match the perspective and angles of the AR environment).

As to claim 7, the prior art as combined discloses detecting first movement of the electronic device that adjusts the field of view of the one or more cameras (Shaviv, col.5, l.61-67; the user changes the orientation of the device); and in response to detecting the first movement of the electronic device, adjusting the second representation of the first photo in accordance with a fixed spatial relationship between the second representation of the first photo and a respective plane in the field of view of the one or more cameras (Shaviv, col.5, l.66-col.6, l.1; the item image is regenerated with the captured orientation change).
As to claim 8, the prior art as combined discloses detecting, via the one or more input devices, a second sequence of one or more inputs including a third input corresponding to selection of a first representation of a second photo displayed in the first user interface for the photos application (Shaviv, fig.5A, fig.5B, 520C; Brahmanapalli, [0040]; the user can select another item image to add); and in response to detecting the second sequence of one or more inputs including the third input corresponding to selection of the first representation, displaying, via the display generation component, a second representation of the second photo in the second user interface that includes the content of at least a portion of the field of view of the one or more cameras (Shaviv, fig.5B, 520C, fig.5C; Brahmanapalli, [0040]; a second item image is added to the AR environment).

As to claim 9, the prior art as combined discloses while displaying the second representation of the first photo in the second user interface, detecting, via the one or more input devices, a fourth input for changing a position of the second representation of the first photo relative to the content of at least a portion of a field of view of the one or more cameras; and in response to detecting the fourth input for changing the position of the second representation of the first photo, moving the second representation of the first photo based on the fourth input (Shaviv, col.8, l.42-52; Brahmanapalli, [0040]; the user uses touch input to move the selected item image).

As to claim 10, the prior art as combined discloses wherein the second representation of the first photo is displayed with an orientation that is perpendicular to a plane in the field of view of the one or more cameras (Brahmanapalli, fig.3A, [0028]; the item image is aligned to the x-y coordinate).
As to claim 11, the prior art as combined discloses while displaying the second representation of the first photo in the second user interface, detecting, via the one or more input devices, a fifth input for changing a size of the second representation of the first photo relative to the content of at least a portion of a field of view of the one or more cameras; and in response to detecting the fifth input for changing the size of the second representation of the first photo, changing a simulated physical size of the second representation of the first photo based on the fifth input (Shaviv, fig.5B; col.7, l.12-15; Brahmanapalli, fig.3A; the user can resize the selected item image using touch gestures).

INDEPENDENT claim 12 is rejected under the same rationale addressed in the rejection of claim 2 above. Claims 13-20 are rejected under the same rationale addressed in the rejections of claims 3-10 above, respectively. INDEPENDENT claim 21 is rejected under the same rationale addressed in the rejection of claim 2 above.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HAOSHIAN SHIH, whose telephone number is (571) 270-1257. The examiner can normally be reached M-F 8:00-5:00.
Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, FRED EHICHIOYA, can be reached at (571) 272-4034. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/HAOSHIAN SHIH/
Primary Examiner, Art Unit 2179

Prosecution Timeline

Mar 01, 2024: Application Filed
Jan 17, 2025: Response after Non-Final Action
Dec 05, 2025: Non-Final Rejection (§101, §103)
Apr 08, 2026: Interview Requested
Apr 14, 2026: Examiner Interview Summary
Apr 14, 2026: Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597186: SYNTHESIZING SHADOWS IN DIGITAL IMAGES UTILIZING DIFFUSION MODELS (2y 5m to grant; granted Apr 07, 2026)
Patent 12591329: REDUCED-SIZE INTERFACES FOR MANAGING ALERTS (2y 5m to grant; granted Mar 31, 2026)
Patent 12578832: DISTANCE-BASED USER INTERFACES (2y 5m to grant; granted Mar 17, 2026)
Patent 12572325: METHOD AND DEVICE FOR PLAYING SOUND EFFECTS OF MUSIC (2y 5m to grant; granted Mar 10, 2026)
Patent 12561039: GENERATIVE MODEL WITH WHITEBOARD (2y 5m to grant; granted Feb 24, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 69%
With Interview: 90% (+21.0%)
Median Time to Grant: 3y 5m
PTA Risk: Low
Based on 545 resolved cases by this examiner. Grant probability derived from career allow rate.
