Prosecution Insights
Last updated: April 19, 2026
Application No. 18/370,798

CONTEXTUAL PRESENTATION OF EXTENDED REALITY CONTENT

Status: Final Rejection (§103)
Filed: Sep 20, 2023
Examiner: GRAY, RYAN M
Art Unit: 2611
Tech Center: 2600 (Communications)
Assignee: Apple Inc.
OA Round: 2 (Final)
Grant Probability: 88% (Favorable)
Estimated OA Rounds: 3-4
Estimated Time to Grant: 2y 2m
Grant Probability with Interview: 98%

Examiner Intelligence

Career Allowance Rate: 88% (above average; 589 granted / 672 resolved; +25.6% vs TC avg)
Interview Lift: +10.9% (moderate) among resolved cases with interview
Average Prosecution: 2y 2m
Currently Pending: 18 applications
Career History: 690 total applications across all art units

Statute-Specific Performance

§101: 7.4% (-32.6% vs TC avg)
§103: 68.4% (+28.4% vs TC avg)
§102: 8.3% (-31.7% vs TC avg)
§112: 3.5% (-36.5% vs TC avg)
Comparison baseline is the estimated Tech Center average. Based on career data from 672 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendments and Remarks

Applicant's arguments filed 10/15/25 have been fully considered as follows:

Applicant argues:

Needham is directed toward identifying and selecting placement locations for a model in an AR environment. More particularly, Needham relies on a model being selected before identifying a placement location (See, e.g., para. [0036]-[0038] of Needham. See also, e.g., Figs. 3A of Needham, reproduced below).

[media_image1.png, grayscale: Needham Fig. 3A]

As the manner in which Needham identifies a placement location depends on a model already being selected, combining Needham with Doken to obtain a system in which a model is selected after a location is determined would require changing the fundamental principle of operation of Needham. (Remarks, Page 7).

Example cited by Applicant

Applicant's argument is unpersuasive because, in the above example, Needham explicitly references multiple models when considering placement locations:

"A first AR model 41 may be presented at a first candidate location on a table 42 near where the lab tech is standing. A second AR model 43 may be presented at a second candidate location on a movable table 44 in the distance. A third AR model 45 may be presented at a third candidate location on a stool 46. A table 47 may not be indicated as a candidate location because of insufficient space or too much clutter on the table 47 (e.g. or may not be ranked high enough to be indicated). Each of the higher ranked candidate placement locations may be highlighted, may blink, or may have other annotations to clearly demark them as options." (Needham, ¶ 23)

Additional Examples in Needham

Applicant's argument is unpersuasive because Needham is not limited to the single example referenced. As disclosed by Needham:

[media_image2.png, grayscale: figure reproduced from Needham]

"a 3D model store 62 communicatively coupled to the room scanner 61 to store the 3D model. An AR model store 63 may store one or more AR models. For example, the AR models may include placement constraints" (¶ 28).

Applicant argues:

Further, as Needham is directed to identifying placement locations for a model that has already been selected, the Assignee submits that it is unclear how a person of ordinary skill in the art would be motivated to look to Doken to modify Needham. For instance, as Needham relies on placement constraints of a known AR model to identify placement locations, there would be no reasonable expectation that Needham's location selection would function if the placement constraints were no longer known (e.g., when an AR model has not yet been selected). (Remarks, Page 8).

Applicant's argument is unpersuasive because Needham does not require only model-specific constraints. For example:

"Pre-specified placement constraints may include typical or required placement locations such as floor, ceiling, wall, table, and/or flat surfaces" (¶ 39)

"Placement constraint may also include…stability requirements, height, etc. In some embodiments the user may override placement constraints." (¶ 39)

Applicant argues:

The Examiner rejected claims 14-20 under 35 U.S.C. § 103 as being obvious over Needham in view of Maeder et al. (U.S. Pub. No. 2024/0062490; hereinafter "Maeder"). The Assignee respectfully submits that Needham and Maeder do not disclose or suggest each and every limitation of these claims. Specifically, claim 14 (from which claims 15-20 depend) recites a processor configured to "evaluate the one or more images of the physical environment to determine a set of candidate presentation and viewing location pairs in an extended reality environment."
The processor is configured to, for at least one of these pairs: "determine a proximity of the electronic device to the candidate viewing location" and "in accordance with a determination that the electronic device is within a predetermined proximity of the candidate viewing location, present the content item at the candidate presentation location in the extended reality environment." The Examiner points to paragraph [0026] of Needham as support for these limitations, and relies on a sentence from Needham that discusses pointing to a candidate location and saying "that one" in order to select where an AR model is displayed. This paragraph does not disclose presenting a content item in response to determining that the electronic device is within a predetermined proximity of the candidate viewing location. Maeder fails to cure this deficiency. (Remarks, Page 8).

Applicant's argument is unpersuasive because Needham considers relative device location:

"In some embodiments of the system 60, the user input module 65 may include any combination of one or more 3D cameras, one or more depth cameras, one or more two-dimensional (2D) cameras, proximity sensors" (¶ 34).

"Another example of a placement ranking criteria may include a proximity to the user. Nearer locations, such as candidate location 1 in FIG. 5, may be favored over more distant locations, such as candidate location 2 in FIG. 5." (¶ 46).

In the context of the user, the location of the user is equivalent to the device: "For example, the user may carry an AR display device such as a smartphone or tablet, or may have a head-worn wearable display such as glasses or a virtual reality headset" (¶ 22).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Use of indicates a limitation is not explicitly disclosed by the reference alone.

Claim(s) 1-13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Needham (US 2018/0096528) in view of Doken (US 2023/0259202).

Claim 1

Needham discloses an electronic device, comprising:

a set of cameras (Needham, ¶ 30: "The room scanner 61 may include any combination of one or more 3D cameras, one or more depth cameras, and/or one or more two-dimensional (2D) cameras");

a display (Needham, ¶ 22: "For example, the user may carry an AR display device such as a smartphone or tablet, or may have a head-worn wearable display such as glasses or a virtual reality headset."); and

a processor communicably coupled to the set of cameras and the display, the processor configured to (worn device implementing Fig. 1):

capture one or more images of a physical environment using the set of cameras (Needham, ¶ 30: "For example, some embodiments of the system 60 may utilize a simultaneous location and mapping (SLAM) algorithm to map the 3D model of the space relative to one or more cameras or sensors. Although the system 60 is described in connection with a room space, other embodiments of the system 60 may be applied to outdoor or other spaces as well.");

evaluate the one or more images of the physical environment to determine a set of candidate presentation locations in an extended reality environment that is based on the physical environment (Needham, ¶ 11, 46: "the placement ranking criteria for a candidate location in the 3D model may include, for example, one or more of: the candidate location matches a previously selected placement location for the AR model, the candidate location matches a placement constraint, the candidate location matches a user preference, a stability of the candidate location, a proximity of the candidate location to a user, a visibility of the candidate location to a user, an accessibility of the candidate location to a user, an accessibility of the candidate location to multiple users, visible breaks relative to the AR model at the candidate location, or a contextual attribute of the candidate location…. Another example of a placement ranking criteria may include a proximity to the user. Nearer locations, such as candidate location 1 in FIG. 5, may be favored over more distant locations, such as candidate location 2 in FIG. 5.").

Needham does not explicitly disclose, but Doken makes obvious each candidate presentation location of the set of candidate presentation locations having associated presentation criteria, the associated presentation criteria including perspective criteria relative to a viewing location (Doken, ¶ 105: "For example, policies may define what size, geometric properties, color, texture, pose, background, depth, types of animation, border, shading, and view from different angles of virtual objects are permitted and restricted.");

for at least one of the set of candidate presentation locations (Doken, ¶ 129: "At block 610, the control circuitry as depicted in FIG. 34 queries one or more locations and sources to identify virtual objects for display."):

identify a content item from a set of content items that satisfies the associated presentation criteria (Doken, ¶ 141: "At block 620 of FIG. 6, the plurality of scored virtual objects may be ranked in order of their scores. For example, the scoring engine may calculate a score of 76 for a Gucci handbag, a score of 91 for a James Bond movie that is playing at a theater within a threshold distance from the physical surface, a score of 57 for John's Tacos which is located within a threshold distance of the physical surface, a score of 72 for a Macy's sweater sale that is 30% off and contextually related to the physical image of Macy's displayed or seen through the viewing device, and a score of 88 for a weather forecast of rain at 3:00 PM"); and

present the identified content item in the extended reality environment at the candidate presentation location (Doken, ¶ 141: "The ranking and order may be used by the control circuitry in determining which virtual object to overlay on the surface.").

Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to consider a set of content items as claimed. One of ordinary skill in the art would have motivation to improve the display of virtual objects based on contextual features in the real world ("Thus, there is a need for better systems and methods for understanding surfaces, context, and the user/viewer/participant's interest and needs, including changing needs and utilization in real time. The disclosed systems and methods enable overlaying of virtual objects in a customized manner that is likely to get the user/viewer/participant's attention and retain it.") (Doken, ¶ 8).
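The score-and-rank selection quoted from Doken ¶ 141 can be illustrated with a short sketch. The scores and item names below come from the quoted example; the function name and the plain sort are illustrative only and are not code from either reference:

```python
# Illustrative sketch of the score-and-rank behavior quoted from Doken ¶ 141.
# The scores are the ones given in the quoted example; the ranking logic is
# a plain descending sort, not an implementation from either reference.

def rank_virtual_objects(scored_objects: dict[str, int]) -> list[str]:
    """Return candidate virtual objects ordered from highest to lowest score."""
    return sorted(scored_objects, key=scored_objects.get, reverse=True)

scores = {
    "Gucci handbag": 76,
    "James Bond movie": 91,
    "John's Tacos": 57,
    "Macy's sweater sale": 72,
    "weather forecast": 88,
}

ranking = rank_virtual_objects(scores)
# Per Doken, the ranking may then be used to choose which virtual
# object to overlay on the surface (the top-ranked one here).
print(ranking[0])  # James Bond movie
```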
One of ordinary skill in the art would have had a reasonable expectation of success because Needham considers ideal placement locations in order to display a virtual object in the best location, and could be further improved by matching contextual features.

Claim 2

Needham does not explicitly disclose, but Doken makes obvious wherein the associated presentation criteria further includes one or more of size criteria or environmental criteria, the environmental criteria including one or more characteristics of a location of the physical environment corresponding to the candidate presentation location (Doken, ¶ 105: "For example, policies may define what size, geometric properties, color, texture, pose, background, depth, types of animation, border, shading, and view from different angles of virtual objects are permitted and restricted.")

Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to consider the claimed criteria. One of ordinary skill in the art would have motivation to improve the display of virtual objects based on contextual features in the real world ("Thus, there is a need for better systems and methods for understanding surfaces, context, and the user/viewer/participant's interest and needs, including changing needs and utilization in real time. The disclosed systems and methods enable overlaying of virtual objects in a customized manner that is likely to get the user/viewer/participant's attention and retain it.") (Doken, ¶ 8). One of ordinary skill in the art would have had a reasonable expectation of success because Needham considers ideal placement locations in order to display a virtual object in the best location, and could be further improved by matching contextual features.

Claim 3

Needham does not explicitly disclose, but Doken makes obvious wherein the one or more characteristics of the environment comprise:

one or more characteristics of the physical environment (Doken, ¶ 105-06: "The policies also define allowed and restricted locations and zones within the surface area where a virtual object may be overlayed…The space and location policy may identify area, sections, coordinates, dimensions, and other space- and location-related information that allows overlaying of the virtual display with no restrictions");

an absolute location of the physical environment (Doken, ¶ 105-06: "The policies also define allowed and restricted locations and zones within the surface area where a virtual object may be overlayed…The space and location policy may identify area, sections, coordinates, dimensions, and other space- and location-related information that allows overlaying of the virtual display with no restrictions");

one or more objects identified at the location of the physical environment (e.g. storefront; ¶ 104: "the virtual objects may be contextually related to the surface on which they are to be overlayed, and, in another embodiment, the virtual objects may not be contextually related. For example, when a virtual object is contextually related, if the surface on which it is to be overlayed in the virtual environment is a Macy's store, then the virtual object may be contextually related to Macy's or a product that is sold or endorsed by Macy's. If the Macy's policy allows it, then a non-contextual virtual object, i.e., a virtual object that does not relate to Macy's or any product of Macy's may be overlayed on the Macy's surface in a virtual environment.");

one or more people identified at the location of the physical environment; or

one or more sounds identified at the location of the physical environment.

Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to consider the claimed criteria. One of ordinary skill in the art would have motivation to improve the display of virtual objects based on contextual features in the real world ("Thus, there is a need for better systems and methods for understanding surfaces, context, and the user/viewer/participant's interest and needs, including changing needs and utilization in real time. The disclosed systems and methods enable overlaying of virtual objects in a customized manner that is likely to get the user/viewer/participant's attention and retain it.") (Doken, ¶ 8). One of ordinary skill in the art would have had a reasonable expectation of success because Needham considers ideal placement locations in order to display a virtual object in the best location, and could be further improved by matching contextual features.

Claim 4

Needham discloses wherein identifying a content item from a set of content items that satisfies the associated presentation criteria comprises identifying a plurality of content items from the set of content items that satisfy the associated presentation criteria, and wherein presenting the identified content item in the extended reality environment at the candidate presentation location comprises: presenting a user interface in the extended reality environment for selecting between the plurality of content items; and on receipt of a selection of one of the plurality of content items from the user, present the one of the plurality of content items in the extended reality environment at the candidate presentation location (Needham, ¶ 26: "The lab technician, in either of the above examples, may then point at a candidate location with their finger and say 'that one.' At that point, the other two candidate AR images may disappear and the AR model at the selected candidate location may be upgraded or enhanced to the full-color, interactive AR
model. Alternatively, an embodiment of the AR system may be configured to show and auto-select only the highest-ranked (best fit) candidate. In that example, in response to the lab technician saying 'OK Cal, show me a PCA Model 3 for calibration' the AR system would immediately anchor the AR model to the highest ranked location (e.g. candidate location 1), eliminating the need for the lab technician to choose a location. The user may then say 'not there, somewhere else,' for example, and cause the system to show the next most highly ranked location. In either of these examples, an embodiment of the AR system advantageously eliminates the need for the lab technician to explicitly choose and express a location to place the AR model using trial and error, reducing the time to start working with the AR model.")

Claim 5

Needham discloses wherein identifying a content item from a set of content items that satisfies the associated presentation criteria comprises identifying a plurality of content items from the set of content items that satisfy the associated presentation criteria, and wherein presenting the identified content item in the extended reality environment at the candidate presentation location comprises presenting each of the plurality of content items sequentially in the extended reality environment at the candidate presentation location (Needham, ¶ 24: "Turning now to FIG. 5, an example of an embodiment of an AR system may alternatively or additionally present additional visual information to indicate candidate locations for the AR model. For example, arrows may point to candidate locations in the visible area of the space and also indicate additional candidate locations that are outside of the current view of the lab technician. A first arrow 51 may point to a first AR model 52 and the arrow 51 may be labeled with a '1' to indicate the highest ranked location. A second arrow 53 may point to a second AR model 54 and the arrow 53 may be labeled with a '2' to indicate the next highest ranked location. A third arrow 55 may point to a third AR model 56 and the arrow 55 may be labeled with a '3' to indicate the third highest ranked location.")

Claim 6

Needham discloses wherein the processor is further configured to: detect movement of the electronic device; and in response to the detection of movement of the electronic device: capture one or more new images of the physical environment using the set of cameras; and evaluate the one or more new images of the physical environment to determine a new set of candidate presentation locations (Needham, ¶ 10, 45: "The scanner 11 may also be communicatively coupled to other system components such as, for example, the location identifier 13, the location ranker 14, and/or the location indicator 15 (e.g. to track movement or changes in the space)… Another example of a placement ranking criteria may include a stability of the location (e.g. stable vs. moving locations). In one example, a location may be identified as a moving place if it moves while being captured by the room scanner 61 or while being assessed by the location ranking module 69. Another example includes a more complex algorithm such as machine vision or image recognition that may recognize likely-movable places such as the top of a chair or stool.")

Claim 7

Needham does not explicitly disclose, but Doken makes obvious wherein evaluating the one or more images of the physical environment to determine the set of candidate presentation locations comprises determining the associated presentation criteria (Doken, ¶ 105: "For example, policies may define what size, geometric properties, color, texture, pose, background, depth, types of animation, border, shading, and view from different angles of virtual objects are permitted and restricted.")

Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to consider the claimed criteria. One of ordinary skill in the art would have motivation to improve the display of virtual objects based on contextual features in the real world ("Thus, there is a need for better systems and methods for understanding surfaces, context, and the user/viewer/participant's interest and needs, including changing needs and utilization in real time. The disclosed systems and methods enable overlaying of virtual objects in a customized manner that is likely to get the user/viewer/participant's attention and retain it.") (Doken, ¶ 8). One of ordinary skill in the art would have had a reasonable expectation of success because Needham considers ideal placement locations in order to display a virtual object in the best location, and could be further improved by matching contextual features.
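Needham's placement ranking (¶¶ 45-46, quoted above for claims 5-6) can be illustrated with a short sketch: stable locations are preferred, and nearer candidate locations outrank more distant ones. The weights, field names, and distance values below are hypothetical, not taken from the reference:

```python
# Illustrative sketch of the kind of placement ranking Needham describes
# (stability and proximity-to-user criteria, ¶¶ 45-46). The scoring weights
# and example candidates are hypothetical.

from dataclasses import dataclass

@dataclass
class CandidateLocation:
    name: str
    distance_to_user: float  # meters (hypothetical unit)
    is_stable: bool          # e.g. False for a movable table or stool top

def placement_score(loc: CandidateLocation) -> float:
    score = 0.0 if loc.is_stable else -10.0  # penalize likely-movable places
    score -= loc.distance_to_user            # favor nearer locations
    return score

candidates = [
    CandidateLocation("table near user", 1.0, True),
    CandidateLocation("movable table in the distance", 5.0, False),
    CandidateLocation("stool", 2.0, True),
]

ranked = sorted(candidates, key=placement_score, reverse=True)
print(ranked[0].name)  # table near user
```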
Claim 8

Needham discloses wherein the electronic device is a head-mounted device (Needham, ¶ 22: "For example, the user may carry an AR display device such as a smartphone or tablet, or may have a head-worn wearable display such as glasses or a virtual reality headset.")

Claim 9

Needham does not explicitly disclose, but Doken makes obvious wherein: each content item of the set of content items is associated with one or more presentation attributes; and identifying the content item from the set of content items that satisfies the associated presentation criteria is based on the one or more associated presentation attributes of the content item (Doken, ¶ 105: "The policies may further define allowances and restrictions with respect to time of display of the virtual object, space and location of display, content of display, timing and duration of display and several other attributes of the virtual objects that are permitted and restricted. For example, policies may define what size, geometric properties, color, texture, pose, background, depth, types of animation, border, shading, and view from different angles of virtual objects are permitted and restricted.")

Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to consider the claimed criteria. One of ordinary skill in the art would have motivation to improve the display of virtual objects based on contextual features in the real world ("Thus, there is a need for better systems and methods for understanding surfaces, context, and the user/viewer/participant's interest and needs, including changing needs and utilization in real time. The disclosed systems and methods enable overlaying of virtual objects in a customized manner that is likely to get the user/viewer/participant's attention and retain it.") (Doken, ¶ 8). One of ordinary skill in the art would have had a reasonable expectation of success because Needham considers ideal placement locations in order to display a virtual object in the best location, and could be further improved by matching contextual features.

Claim 10

Needham does not explicitly disclose, but Doken makes obvious wherein the one or more presentation attributes comprise: a pose of a person in the content item; one or more objects in the content item; a time at which the content item was captured; an elevation at which the content item was captured; an absolute location at which the content item was captured; or one or more characteristics of the physical environment in which the content item was captured (Doken, ¶ 105: "The policies may further define allowances and restrictions with respect to time of display of the virtual object, space and location of display, content of display, timing and duration of display and several other attributes of the virtual objects that are permitted and restricted. For example, policies may define what size, geometric properties, color, texture, pose, background, depth, types of animation, border, shading, and view from different angles of virtual objects are permitted and restricted.")

Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to consider the claimed criteria. One of ordinary skill in the art would have motivation to improve the display of virtual objects based on contextual features in the real world ("Thus, there is a need for better systems and methods for understanding surfaces, context, and the user/viewer/participant's interest and needs, including changing needs and utilization in real time. The disclosed systems and methods enable overlaying of virtual objects in a customized manner that is likely to get the user/viewer/participant's attention and retain it.") (Doken, ¶ 8). One of ordinary skill in the art would have had a reasonable expectation of success because Needham considers ideal placement locations in order to display a virtual object in the best location, and could be further improved by matching contextual features.

Claim 11

Needham discloses wherein presenting the identified content item in the extended reality environment comprises segmenting a subject of the content item and presenting the segmented subject of the content item in the extended reality environment (Needham, ¶ 12: "For example, the scanner 11 may include object recognition and/or image recognition technology to segment, identify, categorize, and/or otherwise recognize objects in the space. For example, the scanner 11 may be able to identify planar surfaces in the space. The scanner 11 may further be able to group related surfaces together and recognize the grouped surfaces as an object such as a table, a cabinet, a chair, a stool, etc. The 3D model may be tagged or include metadata that corresponds to the recognized surfaces or objects. In addition, or alternatively, a user may be able to process the 3D model to group surfaces together as an object and/or to tag surfaces and/or objects with metadata to subsequently aid the ranking/selection of placement locations.")

Claim 12

Needham discloses wherein presenting the identified content item in the extended reality environment further comprises presenting a portion of the content item outside the segmented subject in the extended reality environment with one or more visual effects applied thereto (Needham, ¶ 37: extending from a coordinate of a polygon: "For example, the candidate location may also indicate that the front of the AR model should point a) at the user, or b) to align with the surface it's on (e.g., lining it up with a table). In any event, an identified candidate location may include a coordinate location (e.g. recommended X, Y, Z anchor point of the AR model), an orientation (e.g. a recommended pitch, yaw, roll of the AR model—that is, rotation along the X, Y, and Z axes)")

Claim 13

Needham does not explicitly disclose, but Doken makes obvious wherein presenting the identified content item in the extended reality environment comprises adjusting one or more visual attributes of the content item to blend the identified content item into the extended reality environment (Doken, ¶ 102: "It may also vary in its attributes, such as in size, geometric properties, color, texture, pose, background, depth, types of animation, border, shading, and view from different angles.")

Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to consider the claimed criteria. One of ordinary skill in the art would have motivation to improve the display of virtual objects based on contextual features in the real world ("Thus, there is a need for better systems and methods for understanding surfaces, context, and the user/viewer/participant's interest and needs, including changing needs and utilization in real time. The disclosed systems and methods enable overlaying of virtual objects in a customized manner that is likely to get the user/viewer/participant's attention and retain it.") (Doken, ¶ 8). One of ordinary skill in the art would have had a reasonable expectation of success because Needham considers ideal placement locations in order to display a virtual object in the best location, and could be further improved by matching contextual features.

Claim(s) 14-20 is/are rejected under 35 U.S.C.
103 as being unpatentable over Needham (US 2018/0096528) in view of Maeder (US 2024/0062490) Claim 14 Needham discloses an electronic device, comprising: a set of cameras (Needham, ¶ 30: “The room scanner 61 may include any combination of one or more 3D cameras, one or more depth cameras, and/or one or more two-dimensional (2D) cameras”); a display (Needham, ¶ 22: “For example, the user may carry an AR display device such as a smartphone or tablet, or may have a head-worn wearable display such as glasses or a virtual reality headset.”); and a processor communicably coupled to the set of cameras and the display (worn device implementing Fig.1) , the processor configured to: capture one or more images of a physical environment using the set of cameras (Needham, ¶ 30: “or example, some embodiments of the system 60 may utilize a simultaneous location and mapping (SLAM) algorithm to map the 3D model of the space relative to one or more cameras or sensors. Although the system 60 is described in connection with a room space, other embodiments of the system 60 may be applied to outdoor or other spaces as well.”); evaluate the one or more images of the physical environment to determine a set of candidate presentation and viewing location pairs in an extended reality environment that is based on the physical environment, each of the set of candidate presentation and viewing location pairs having associated presentation criteria (Needham, ¶ 11, 46: “the placement ranking criteria for a candidate location in the 3D model may include, for example, one or more of: the candidate location matches a previously selected placement location for the AR model, the candidate location matches a placement constraint, the candidate location matches a user preference, a stability of the candidate location, a proximity of the candidate location to a user, a visibility of the candidate location to a user, an accessibility of the candidate location to a user, an accessibility of the candidate 
location to multiple users, visible breaks relative to the AR model at the candidate location, or a contextual attribute of the candidate location…. Another example of a placement ranking criteria may include a proximity to the user. Nearer locations, such as candidate location 1 in FIG. 5, may be favored over more distant locations, such as candidate location 2 in FIG. 5.”); for at least one of the set of candidate presentation and viewing location pairs: present an indicator in the extended reality environment at the candidate presentation location (Needham, ¶ 24: “Turning now to FIG. 5, an example of an embodiment of an AR system may alternatively or additionally present additional visual information to indicate candidate locations for the AR model. For example, arrows may point to candidate locations in the visible area of the space and also indicate additional candidate locations that are outside of the current view of the lab technician. A first arrow 51 may point to a first AR model 52 and the arrow 51 may be labeled with a “1” to indicate the highest ranked location.”); PNG media_image3.png 606 419 media_image3.png Greyscale determine a proximity of the electronic device to the candidate viewing location (Needham, ¶ 11: “for example, one or more of: the candidate location matches a previously selected placement location for the AR model, the candidate location matches a placement constraint, the candidate location matches a user preference, a stability of the candidate location, a proximity of the candidate location to a user, a visibility of the candidate location to a user, an accessibility of the candidate location to a user, an accessibility of the candidate location to multiple users, visible breaks relative to the AR model at the candidate location, or a contextual attribute of the candidate location.”); and in accordance with a determination that the electronic device is within a predetermined proximity of the candidate viewing location, present the 
content item at the candidate presentation location in the extended reality environment (Needham, ¶ 26: “The lab technician, in either of the above examples, may then point at a candidate location with their finger and say “that one.” At that point, the other two candidate AR images may disappear and the AR model at the selected candidate location may be upgraded or enhanced to the full-color, interactive AR model. Alternatively, an embodiment of the AR system may be configured to show and auto-select only the highest-ranked (best fit) candidate”)

Needham does not explicitly disclose, but Maeder makes obvious: select a content item from a set of content items that satisfies the associated presentation criteria (Maeder, ¶ 157: “(v) receive relevant objects from the context-aware object selection procedure 330 for the set of potential placement spaces”);

Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to consider a set of content items as claimed. One of ordinary skill in the art would have been motivated to improve the display of virtual objects based on contextual features in the real world (“Placement space detection is crucial in MR, as it allows software to interact with the images of the real-world perceived by the user. Without placement space detection, added objects would lack size and light reference, thus making it impossible for software to add the objects in the user vision so that it naturally blends with the environment.”) (Maeder, ¶ 4). One of ordinary skill in the art would have had a reasonable expectation of success because Needham considers ideal placement locations in order to display a virtual object in the best location, and could be further improved by matching contextual features.
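Taken together, the limitations mapped above describe a proximity-gated presentation loop: rank candidate presentation/viewing location pairs, show an indicator at each candidate location, and present a content item satisfying the presentation criteria only once the device comes within a threshold distance of a viewing location. The following is a purely illustrative sketch of that flow; every name, the distance threshold, and the selection logic are hypothetical, and this is not the applicant's, Needham's, or Maeder's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    presentation_loc: tuple  # (x, y, z) where content would appear
    viewing_loc: tuple       # (x, y, z) where the user should stand
    score: float             # placement-ranking score (stability, visibility, ...)

def distance(a, b):
    """Euclidean distance between two 3-D points."""
    return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

def update(device_pos, candidates, content_items, criteria, threshold=1.5):
    """One pass of the claimed flow: indicate each ranked candidate, then
    present a criteria-matching content item once the device is within the
    (hypothetical) threshold of that candidate's viewing location."""
    actions = []
    for c in sorted(candidates, key=lambda c: c.score, reverse=True):
        actions.append(("indicator", c.presentation_loc))  # e.g. arrow/highlight
        if distance(device_pos, c.viewing_loc) <= threshold:
            # Select a content item that satisfies the presentation criteria.
            item = next((i for i in content_items if criteria(i, c)), None)
            if item is not None:
                actions.append(("present", item, c.presentation_loc))
    return actions
```

On this reading, the dispute is about ordering: the claim selects the content item inside the proximity check (after a location is determined), whereas Applicant argues Needham's placement logic presupposes an already-selected model.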
Claim 15

Needham discloses wherein the processor is further configured to present the indicator based on the proximity of the electronic device to the candidate presentation location (Needham, ¶ 46: “Another example of a placement ranking criteria may include a proximity to the user. Nearer locations, such as candidate location 1 in FIG. 5, may be favored over more distant locations, such as candidate location 2 in FIG. 5….A first arrow 51 may point to a first AR model 52 and the arrow 51 may be labeled with a “1” to indicate the highest ranked location”)

Claim 16

Needham discloses wherein the processor is further configured to present the indicator based on the selected content item (Needham, ¶ 24: “A first arrow 51 may point to a first AR model 52 and the arrow 51 may be labeled with a “1” to indicate the highest ranked location. A second arrow 53 may point to a second AR model 54 and the arrow 53 may be labeled with a “2” to indicate the next highest ranked location. A third arrow 55 may point to a third AR model 56 and the arrow 55 may be labeled with a “3” to indicate the third highest ranked location.”)

Claim 17

Needham discloses wherein the associated presentation criteria includes one or more of: size criteria; perspective criteria relative to the candidate viewing location; or environmental criteria including one or more characteristics of a location of the physical environment corresponding to the associated candidate presentation location (Needham, ¶ 11, 46: “the placement ranking criteria for a candidate location in the 3D model may include, for example, one or more of: the candidate location matches a previously selected placement location for the AR model, the candidate location matches a placement constraint, the candidate location matches a user preference, a stability of the candidate location, a proximity of the candidate location to a user, a visibility of the candidate location to a user, an accessibility of the candidate location to a user, an
accessibility of the candidate location to multiple users, visible breaks relative to the AR model at the candidate location, or a contextual attribute of the candidate location…. Another example of a placement ranking criteria may include a proximity to the user. Nearer locations, such as candidate location 1 in FIG. 5, may be favored over more distant locations, such as candidate location 2 in FIG. 5.”);

Claim 18

Needham discloses wherein the one or more characteristics of the environment comprise: one or more characteristics of the physical environment; an absolute location of the physical environment; one or more objects identified at the location of the physical environment; one or more people identified at the location of the physical environment; or one or more sounds identified at the location of the physical environment (Needham, ¶ 11, 46: “the placement ranking criteria for a candidate location in the 3D model may include, for example, one or more of: the candidate location matches a previously selected placement location for the AR model, the candidate location matches a placement constraint, the candidate location matches a user preference, a stability of the candidate location, a proximity of the candidate location to a user, a visibility of the candidate location to a user, an accessibility of the candidate location to a user, an accessibility of the candidate location to multiple users, visible breaks relative to the AR model at the candidate location, or a contextual attribute of the candidate location…. Another example of a placement ranking criteria may include a proximity to the user. Nearer locations, such as candidate location 1 in FIG. 5, may be favored over more distant locations, such as candidate location 2 in FIG.
5.”);

Claim 19

Needham discloses wherein evaluating the one or more images of the physical environment to determine the set of candidate presentation and viewing location pairs comprises determining the associated presentation criteria (e.g. for proximity: Needham, ¶ 46: “Another example of a placement ranking criteria may include a proximity to the user. Nearer locations, such as candidate location 1 in FIG. 5, may be favored over more distant locations, such as candidate location 2 in FIG. 5….A first arrow 51 may point to a first AR model 52 and the arrow 51 may be labeled with a “1” to indicate the highest ranked location”)

Claim 20

Needham discloses wherein the processor is further configured to: detect movement of the electronic device; and in response to the detection of movement of the electronic device: capture one or more new images of the physical environment using the set of cameras; and evaluate the new one or more images of the updated physical environment to determine a new set of candidate presentation locations (Needham, ¶ 10, 45: “The scanner 11 may also be communicatively coupled to other system components such as, for example, the location identifier 13, the location ranker 14, and/or the location indicator 15 (e.g. to track movement or changes in the space)… Another example of a placement ranking criteria may include a stability of the location (e.g. stable vs. moving locations). In one example, a location may be identified as a moving place if it moves while being captured by the room scanner 61 or while being assessed by the location ranking module 69. Another example includes a more complex algorithm such as machine vision or image recognition that may recognize likely-movable places such as the top of a chair or stool.”)

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to RYAN M GRAY whose telephone number is (571)272-4582. The examiner can normally be reached on Monday through Friday, 9:00am-5:30pm (EST).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee Tung can be reached on (571)272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RYAN M GRAY/
Primary Examiner, Art Unit 2611

Prosecution Timeline

Sep 20, 2023: Application Filed
Jul 11, 2025: Non-Final Rejection (§103)
Oct 15, 2025: Response Filed
Jan 13, 2026: Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597216
ARTIFICIAL INTELLIGENCE VIRTUAL MAKEUP METHOD AND DEVICE USING MULTI-ANGLE IMAGE RECOGNITION
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12586252
METHOD FOR ENCODING THREE-DIMENSIONAL VOLUMETRIC DATA
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12572892
SYSTEMS AND METHODS FOR VISUALIZATION OF UTILITY LINES
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12561928
SYSTEMS AND METHODS FOR CALCULATING OPTICAL MEASUREMENTS AND RENDERING RESULTS
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12542946
REMOTE PRESENTATION WITH AUGMENTED REALITY CONTENT SYNCHRONIZED WITH SEPARATELY DISPLAYED VIDEO CONTENT
Granted Feb 03, 2026 (2y 5m to grant)
Based on the examiner's 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 88%
With Interview (+10.9%): 98%
Median Time to Grant: 2y 2m
PTA Risk: Moderate
Based on 672 resolved cases by this examiner. Grant probability derived from career allow rate.
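The headline figures above can be reconciled with simple arithmetic. This is a sketch of one plausible derivation; the additive interview lift and the display rounding are assumptions, not documented behavior of the tool:

```python
# Career allow rate: 589 granted of 672 resolved cases (figures from the page).
granted, resolved = 589, 672
base_rate = 100 * granted / resolved   # ~87.6, displayed (rounded) as 88%

# Interview lift of +10.9 percentage points, assumed to combine additively.
with_interview = base_rate + 10.9      # ~98.5, consistent with the displayed
                                       # 98% if the tool truncates the decimal
```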
