Prosecution Insights
Last updated: April 19, 2026
Application No. 18/755,357

SYSTEMS AND METHODS FOR PRESENTATION OF AUGMENTED REALITY SUPPLEMENTAL CONTENT IN COMBINATION WITH PRESENTATION OF MEDIA CONTENT

Non-Final OA (§103)

Filed: Jun 26, 2024
Examiner: TUNG, KEE M
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Adeia Guides Inc.
OA Round: 1 (Non-Final)

Grant Probability: 8% (At Risk)
Expected OA Rounds: 1-2
Time to Grant: 3y 0m
Grant Probability with Interview: 18%

Examiner Intelligence

Career Allow Rate: 8% (15 granted / 189 resolved; -54.1% vs Tech Center average). This examiner grants only 8% of cases.
Interview Lift: +10.6% for resolved cases with an interview vs. without (a moderate, roughly +11% lift).
Typical Timeline: 3y 0m average prosecution; 12 applications currently pending.
Career History: 201 total applications across all art units.
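The headline numbers above are mutually consistent and can be reproduced with a few lines of arithmetic. A minimal sketch follows, assuming the 18% interview figure is simply the career allow rate plus the additive interview lift; the variable names are illustrative, not the dashboard's actual schema:

```python
# Minimal sketch reproducing the examiner-intelligence numbers above.
# Assumption: the "with interview" figure is the career allow rate plus an
# additive lift (8% + 10.6% ~= 18%); this is inferred, not a documented model.

granted = 15
resolved = 189
pending = 12
total_applications = 201  # across all art units

career_allow_rate = granted / resolved               # ~0.079 -> displayed as 8%
interview_lift = 0.106                               # +10.6% with interview
with_interview = career_allow_rate + interview_lift  # ~0.185 -> displayed as 18%
implied_tc_average = career_allow_rate + 0.541       # -54.1% vs TC avg -> ~62%

print(f"Career allow rate: {career_allow_rate:.1%}")
print(f"With interview: {with_interview:.1%}")
print(f"Implied Tech Center average: {implied_tc_average:.1%}")
print(f"Pending: {pending} of {total_applications} total applications")
```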

Statute-Specific Performance

§101: 9.3% (-30.7% vs TC avg)
§103: 56.3% (+16.3% vs TC avg)
§102: 17.8% (-22.2% vs TC avg)
§112: 11.2% (-28.8% vs TC avg)
Deltas are relative to the Tech Center average estimate (the chart's black line). Based on career data from 189 resolved cases.
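The four deltas above are internally consistent with a single baseline: subtracting each delta from its rate yields exactly 40.0% in every case, which is evidently the Tech Center average estimate drawn as the chart's black line. A minimal check, with illustrative names only:

```python
# Sketch: recover the Tech Center baseline implied by the per-statute figures.
# rate - delta comes out to 40.0 for every statute, so the chart's black line
# appears to be a single 40% estimate. Names are illustrative, not a schema.

rates =  {"101": 9.3, "103": 56.3, "102": 17.8, "112": 11.2}
deltas = {"101": -30.7, "103": 16.3, "102": -22.2, "112": -28.8}

for statute, rate in rates.items():
    tc_avg = rate - deltas[statute]  # 40.0 in every case
    print(f"§{statute}: {rate:.1f}% ({deltas[statute]:+.1f}% vs TC avg {tc_avg:.1f}%)")
```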

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Status of Claims

Claims 52-71 are currently pending in this application.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 9/10/2024 and 10/4/2024 are hereby acknowledged. All references have been considered by the examiner. Initialed copies of the PTO-1449 are included in this correspondence.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 52-55, 60-65 and 70-71 are rejected under 35 U.S.C. 103 as being unpatentable over Bar-Zeev et al. (8,576,276; IDS) in view of Curry et al. (2010/0026809).

Regarding claim 52, Bar-Zeev teaches a method (e.g., a user display apparatus is provided; the user display apparatus may include a head-mounted portion with associated electrical and optical components which provide a per-user, personalized point-of-view of augmented reality content. In one approach, the augmented reality content augments or replaces video content such as on a television or other video display screen. The augmented reality content frees the user from the constraints of the video display screen by placing sound and imagery anywhere in space, perspective-correct for each user. …
In another approach, the augmented reality image can be rendered on a vertical or horizontal surface in a static location. Bar-Zeev: c.1 L.20-24. Therefore, several approaches are disclosed to view the augmented reality (AR) content with a head-mounted display) comprising:

identifying a supplemental Augmented Reality (AR) content (e.g., a scene of a man such as a cowboy standing on a ground region 1216 in a mountain setting with trees 1210 in the background, and a sky 1212 with a cloud 1214. Bar-Zeev: c.21 L.43-46 and Fig. 12B) to a user device (e.g., the video display screen 1110; Bar-Zeev: c.21 L.42-43 and Figs. 12B and 12C. The video display screen 1110 has top and bottom edges 1200 and 1204, respectively, with a width w1, and right and left edges 1202 and 1206, respectively, with a height h1, which represents an area in which images are displayed by a television, computer monitor or other display device. Bar-Zeev: c.20 L.45-49) to be displayed on an AR device of a user (e.g., FIG. 12C depicts the images of FIG. 12B as seen via a HMD device. The video display screen 1110 is seen by the user as a real-world scene through the see-through lenses of the HMD device 2. Bar-Zeev: c.22 L.38-41 and Figs. 12B-12C. The open regions 1241 and 1245 indicate where light from the video display screen enters the user's eyes. Bar-Zeev: c.22 L.59-61);

identifying a recommended viewing position for the supplemental AR content, wherein the recommended viewing position is visible to the user of the AR device (e.g., FIG. 14A depicts different perspectives and locations of different users relative to a video display screen. The video display screen 1110 depicted previously is provided from a top view in which its width w1 and thickness are seen. The video display screen 1110 extends in a vertical x-y plane, where the y- and z-axes of a Cartesian coordinate system are shown. An x-axis (not shown) extends out of the page. The z-axis is normal or perpendicular to the x-y plane. An origin of the coordinate system can be at the location of the video display screen, along a focal axis of a depth camera of the hub, or at another specified location. Bar-Zeev: c.24 L.11-21. An angle a1 represents an angular offset from the z-axis when a user is at a location 1400, while an angle a2 represents an angular offset from the z-axis when a user is at a location 1402. Generally, the user views the video display screen from a perspective which is based on the user's angular offset (e.g., in a horizontal y-z plane) relative to the normal axis. Similarly, a 3-D augmented reality image can be rendered from the user's perspective. For example, a 3-D augmented reality image can be rendered from the perspective of the location 1400 for a first user at that location, while, concurrently, the 3-D augmented reality image is rendered from the perspective of the location 1402 for a second user at that location. In this way, the augmented reality image is rendered in the most realistic way for each user. The perspective of a user relative to the augmented reality image is the user's point of view of the image, as illustrated by the following examples. Bar-Zeev: c.24 L.22-37. FIG. 14B depicts a 3-D augmented reality object as seen by a first user from a first perspective, where the object appears to be coming out of the video display screen. In this example, the 3-D augmented reality object 1410 is a dolphin which appears to have emerged from the video display screen 1110.
The dolphin is rendered from the perspective of the user location 1400 of FIG. 14A, which is slightly right of the normal axis. In this example, assume the dolphin would appear to come straight out, directly toward a user that is located directly on the normal axis. Bar-Zeev: c.24 L.38-47. Therefore, the recommended viewing position can be any position making an angular offset a between a1 and a2 from the z-axis to view the 3-D augmented reality image, so that the first user can see the effect of the dolphin coming straight out toward the first user);

determining that the user of the AR device is not located at the recommended viewing position (e.g., a 3-D augmented reality image can be rendered from the perspective of the location 1400 for a first user at that location. Bar-Zeev: c.24 L.29-31);

based at least in part on the determining that the user of the AR device is not located at the recommended viewing position (it is obvious that the first user cannot be at the location of the second user at the same time): causing the AR device to generate for display an AR phantom body outline at the recommended viewing position that is visible to the user of the AR device (e.g., FIG. 13A depicts a video display screen with video content, where augmented reality video of a virtual audience is also provided. To enhance the feeling of being in a movie theater or other communal location, augmented reality video of one or more audience members 1300 can be provided. An example audience member 1302 is depicted from the back, as if the audience member 1302 was sitting in front of the user and viewing the video display screen 1110 with the user. The example audience member 1302 is facing the video display screen 1110 as if he or she was viewing the video display screen. Bar-Zeev: c.23 L.28-38); and

based on determining that the location of the user has changed to the recommended viewing position, causing the AR device to generate for display the supplemental AR content (e.g., in various embodiments, the virtual image will be adjusted to match the appropriate orientation, size and shape based on the object being replaced or the environment into which the image is being inserted. In addition, the virtual image can be adjusted to include reflectivity and shadows. In one embodiment, HMD device 2, processing unit 4 and hub computing device 12 work together, as each of the devices includes a subset of sensors that are used to obtain the data for determining where, when and how to insert the virtual images. Bar-Zeev: c.10 L.60-67 and c.11 L.1-2. An angle a1 represents an angular offset from the z-axis when a user is at a location 1400, while an angle a2 represents an angular offset from the z-axis when a user is at a location 1402. Generally, the user views the video display screen from a perspective which is based on the user's angular offset (e.g., in a horizontal y-z plane) relative to the normal axis. Similarly, a 3-D augmented reality image can be rendered from the user's perspective. For example, a 3-D augmented reality image can be rendered from the perspective of the location 1400 for a first user at that location, while, concurrently, the 3-D augmented reality image is rendered from the perspective of the location 1402 for a second user at that location. In this way, the augmented reality image is rendered in the most realistic way for each user. The perspective of a user relative to the augmented reality image is the user's point of view of the image, as illustrated by the following examples. Bar-Zeev: c.24 L.22-37).

While Bar-Zeev does not explicitly teach, Curry teaches:

(1_1) causing the AR device to generate for display the supplemental AR content (e.g., in FIG. 13B image interpolation is shown in the circles between the square shapes that show the 16 cameras in 1306, 1308, 1310, 1312, 1314, 1316, 1318, 1320, 1322, 1324, 1326, 1328, 1330, 1332, 1334, and 1336. The circles between the squares that represent the cameras 1306-1336 show the X-Y perspective and field of vision between the focus point and the interpolated images. Not all of the interpolated images, pictured in the circles on the periphery of the camera array circle, are labeled for the sake of simplicity. The image interpolation points between cameras 1306 and 1308 are shown in 1338, 1340, and 1342. The method for constructing the interpolated perspectives in 1338, 1340, and 1342 is described further below and shown in FIG. 14. Curry: [0145] L.1-13 and Fig. 13B. [Curry Fig. 13B reproduced in the original Office Action.]

A process for constructing the interpolated images from the vantage points illustrated in FIG. 13B, between cameras 1306 and 1308, is shown in FIG. 14. In the first interpolation view (Interpolation View 1) 1400, cameras 1306 and 1308 correspond to cameras 1404 and 1406 respectively. In FIG. 13 the three interpolated points between cameras 1306 and 1308 are shown in 1338, 1340, and 1342. Those same three interpolated points are shown in FIG. 14 in 1400 in Interpolation View 1 as viewpoints 1408, 1410, and 1412. Interpolation View 1 (1400) shows the interpolated viewpoints 1408, 1410, and 1412 as they are constructed upon completion of the intermittent interpolated perspective images. In Interpolation View 2 (1414), the first interpolation image 1422 is created using the image from camera 1418 and the image from camera 1420. In Interpolation View 3 (1424), a second interpolation image 1434 is constructed using the image from a first interpolation image 1432 and an image from a second camera 1430. In Interpolation View 4 (1436), a third interpolation image 1448 is constructed using an image from a first camera 1440 and a first interpolation image 1446. Thus, FIG. 14 illustrates how intermittent images can be created using interpolation and used to populate vantage points in between different cameras. Curry: [0147] L.1-23 and Fig. 14. [Curry Fig. 14 reproduced in the original Office Action.]

The supplemental AR content at an angular position a between a1 and a2 can be generated by interpolation (of Curry) between the contents at the angular positions a1 and a2).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Curry into the teaching of Bar-Zeev so that the image (supplemental AR content) at any point between two angular positions a1 and a2 can be obtained from interpolation of the images (contents) at the two positions.

Regarding claim 53, the combined teaching of Bar-Zeev and Curry teaches the method of claim 52, wherein the supplemental AR content comprises a 3D AR figure representation of an actor's body (e.g., the AR object (dolphin 1410, 1412) is occluding the video display screen 1110 for the viewer at position 1400 and not occluding it for the viewer at 1402; Bar-Zeev: Figs. 14A-14C).
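The claim 52 mapping rests on Bar-Zeev's angular-offset geometry: the screen normal is the z-axis, a user's perspective is their angular offset from that normal in the horizontal y-z plane, and the examiner reads the "recommended viewing position" as any position whose offset falls between a1 and a2. A minimal geometric sketch of that reading (illustrative only, not code from either reference; all names are hypothetical):

```python
import math

# Illustrative sketch of the angular-offset reading of Bar-Zeev Fig. 14A:
# the screen lies in the x-y plane, the z-axis is its normal, and a user's
# perspective is their angular offset in the horizontal y-z plane.
# Hypothetical names; not code from Bar-Zeev or the application.

def angular_offset(user_y: float, user_z: float) -> float:
    """Angle (degrees) between the user's position and the screen normal."""
    return math.degrees(math.atan2(user_y, user_z))

def at_recommended_position(user_y, user_z, a1_deg, a2_deg) -> bool:
    """True if the user's offset falls within the recommended band [a1, a2]."""
    a = angular_offset(user_y, user_z)
    lo, hi = sorted((a1_deg, a2_deg))
    return lo <= a <= hi

# A user slightly right of the normal axis (cf. location 1400):
print(angular_offset(0.5, 3.0))                  # ~9.5 degrees
print(at_recommended_position(0.5, 3.0, 5, 15))  # True -> display the AR content
```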
Regarding claim 54, the combined teaching of Bar-Zeev and Curry teaches the method of claim 52, wherein the recommended viewing position is a first recommended viewing position of a plurality of recommended viewing positions (e.g., an angle a1 represents an angular offset from the z-axis when a user is at a location 1400, while an angle a2 represents an angular offset from the z-axis when a user is at a location 1402. Generally, the user views the video display screen from a perspective which is based on the user's angular offset (e.g., in a horizontal y-z plane) relative to the normal axis. Similarly, a 3-D augmented reality image can be rendered from the user's perspective. For example, a 3-D augmented reality image can be rendered from the perspective of the location 1400 for a first user at that location, while, concurrently, the 3-D augmented reality image is rendered from the perspective of the location 1402 for a second user at that location. In this way, the augmented reality image is rendered in the most realistic way for each user. The perspective of a user relative to the augmented reality image is the user's point of view of the image, as illustrated by the following examples. Bar-Zeev: c.24 L.22-37. Therefore, the recommended viewing position can be any position making an angular offset a between a1 and a2 from the z-axis to view the 3-D augmented reality image, so that the first user can see the effect of the dolphin coming straight out toward the first user); and wherein the method further comprises: based on determining that the location of the user has changed to any of the plurality of recommended viewing positions, causing the AR device to generate for display the supplemental AR content (e.g., a process for constructing the interpolated images from the vantage points illustrated in FIG. 13B, between cameras 1306 and 1308, is shown in FIG. 14. In the first interpolation view (Interpolation View 1) 1400, cameras 1306 and 1308 correspond to cameras 1404 and 1406 respectively. In FIG. 13 the three interpolated points between cameras 1306 and 1308 are shown in 1338, 1340, and 1342. Those same three interpolated points are shown in FIG. 14 in 1400 in Interpolation View 1 as viewpoints 1408, 1410, and 1412. Interpolation View 1 (1400) shows the interpolated viewpoints 1408, 1410, and 1412 as they are constructed upon completion of the intermittent interpolated perspective images. In Interpolation View 2 (1414), the first interpolation image 1422 is created using the image from camera 1418 and the image from camera 1420. In Interpolation View 3 (1424), a second interpolation image 1434 is constructed using the image from a first interpolation image 1432 and an image from a second camera 1430. In Interpolation View 4 (1436), a third interpolation image 1448 is constructed using an image from a first camera 1440 and a first interpolation image 1446. Thus, FIG. 14 illustrates how intermittent images can be created using interpolation and used to populate vantage points in between different cameras. Curry: [0147] L.1-23 and Fig. 14).

Regarding claim 55, the combined teaching of Bar-Zeev and Curry teaches the method of claim 54, further comprising: causing the AR device to generate for display a respective AR phantom body outline at each of the plurality of recommended viewing positions (e.g., FIG. 13A depicts a video display screen with video content, where augmented reality video of a virtual audience is also provided.
To enhance the feeling of being in a movie theater or other communal location, augmented reality video of one or more audience members 1300 can be provided. An example audience member 1302 is depicted from the back, as if the audience member 1302 was sitting in front of the user and viewing the video display screen 1110 with the user. The example audience member 1302 is facing the video display screen 1110 as if he or she was viewing the video display screen. Bar-Zeev: c.23 L.28-38. It is possible for the augmented reality video to depict one or more audience members as if they were sitting alongside the user, behind the user, across the room from the user, or other relative location with respect to the user. Bar-Zeev: c.23 L.43-46).

Regarding claim 60, the combined teaching of Bar-Zeev and Curry teaches the method of claim 52, wherein the recommended viewing position is determined based at least in part on an angle between (a) the user and (b) a position associated with the supplemental AR content (e.g., FIG. 14A depicts different perspectives and locations of different users relative to a video display screen. The video display screen 1110 depicted previously is provided from a top view in which its width w1 and thickness are seen. The video display screen 1110 extends in a vertical x-y plane, where the y- and z-axes of a Cartesian coordinate system are shown. An x-axis (not shown) extends out of the page. The z-axis is normal or perpendicular to the x-y plane. An origin of the coordinate system can be at the location of the video display screen, along a focal axis of a depth camera of the hub, or at another specified location. Bar-Zeev: c.24 L.11-21. An angle a1 represents an angular offset from the z-axis when a user is at a location 1400, while an angle a2 represents an angular offset from the z-axis when a user is at a location 1402. Generally, the user views the video display screen from a perspective which is based on the user's angular offset (e.g., in a horizontal y-z plane) relative to the normal axis. Similarly, a 3-D augmented reality image can be rendered from the user's perspective. For example, a 3-D augmented reality image can be rendered from the perspective of the location 1400 for a first user at that location, while, concurrently, the 3-D augmented reality image is rendered from the perspective of the location 1402 for a second user at that location. In this way, the augmented reality image is rendered in the most realistic way for each user. The perspective of a user relative to the augmented reality image is the user's point of view of the image, as illustrated by the following examples. Bar-Zeev: c.24 L.22-37. Therefore, the recommended viewing position can be any position making an angular offset a between a1 and a2 from the z-axis to view the 3-D augmented reality image, so that the first user can see the effect of the dolphin coming straight out toward the first user).

Regarding claim 61, the combined teaching of Bar-Zeev and Curry teaches the method of claim 60, wherein the recommended viewing position is additionally determined based at least in part on an angle between (a) the user and (b) a position associated with a main content corresponding to the supplemental AR content (e.g., a process for constructing the interpolated images from the vantage points illustrated in FIG. 13B, between cameras 1306 and 1308, is shown in FIG. 14. In the first interpolation view (Interpolation View 1) 1400, cameras 1306 and 1308 correspond to cameras 1404 and 1406 respectively.
In FIG. 13 the three interpolated points between cameras 1306 and 1308 are shown in 1338, 1340, and 1342. Those same three interpolated points are shown in FIG. 14 in 1400 in Interpolation View 1 as viewpoints 1408, 1410, and 1412. Interpolation View 1 (1400) shows the interpolated viewpoints 1408, 1410, and 1412 as they are constructed upon completion of the intermittent interpolated perspective images. In Interpolation View 2 (1414), the first interpolation image 1422 is created using the image from camera 1418 and the image from camera 1420. In Interpolation View 3 (1424), a second interpolation image 1434 is constructed using the image from a first interpolation image 1432 and an image from a second camera 1430. In Interpolation View 4 (1436), a third interpolation image 1448 is constructed using an image from a first camera 1440 and a first interpolation image 1446. Thus, FIG. 14 illustrates how intermittent images can be created using interpolation and used to populate vantage points in between different cameras. Curry: [0147] L.1-23 and Fig. 14).

Regarding claims 62-65 and 70-71, the claims are system claims of method claims 52-55 and 60-61 respectively. The claims are similar in scope to claims 52-55 and 60-61 respectively, and they are rejected under similar rationale as claims 52-55 and 60-61 respectively. Bar-Zeev teaches that a user display apparatus is provided; the user display apparatus may include a head-mounted portion with associated electrical and optical components which provide a per-user, personalized point-of-view of augmented reality content (Bar-Zeev: c.1 L.20-24).

Claims 56 and 66 are rejected under 35 U.S.C. 103 as being unpatentable over Bar-Zeev in view of Curry as applied to claim 52 (62), and further in view of Keeley (2001/0045960).

Regarding claim 56, the combined teaching of Bar-Zeev and Curry teaches the method of claim 52, further comprising: causing the AR device to generate a text prompt directing the user to sit where the AR phantom body outline is located (see 56_1 below). While the combined teaching of Bar-Zeev and Curry does not explicitly teach, Keeley teaches:

(56_1) to generate a text prompt directing the user to sit where the AR phantom body outline is located (e.g., in the first way, a particular text list box 110 is selected and dragged by means of the mouse 20 according to techniques well known in the art, producing a phantom outline 120. The phantom outline 120 may be repositioned on another text list box 110 as shown by arrow 122. When it is released as shown in FIG. 7 at process block 124, then at succeeding process block 126, the program 30 moves the existing text list boxes 110 down one in the list so as to change their relative priorities. Keeley: [0059] L.2-11. Therefore, by dragging the text box with a mouse, the phantom outline is moved to the released position. Thus, the position of the first user is positioned with the text box).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Keeley into the combined teaching of Bar-Zeev and Curry so that the viewing position is located by dragging a text box and releasing it at the desired position.

Regarding claim 66, the claim is a system claim of method claim 56. The claim is similar in scope to claim 56 and it is rejected under similar rationale as claim 56.

Claims 57 and 67 are rejected under 35 U.S.C. 103 as being unpatentable over Bar-Zeev in view of Curry as applied to claim 52 (62), and further in view of Weng et al. (TW201024667, machine translated).

Regarding claim 57, the combined teaching of Bar-Zeev and Curry teaches the method of claim 52, further comprising: causing the AR device to generate a voice prompt directing the user to sit where the AR phantom body outline is located (see 57_1 below). While the combined teaching of Bar-Zeev and Curry does not explicitly teach, Weng teaches:

(57_1) to generate a voice prompt directing the user to sit where the AR phantom body outline is located (e.g., (1) the travel path is a map displayed at the current location: the road map is directly displayed on the map of the current location, and the user can reach the target location 38 according to the path displayed on the map. The map on the map shows that the path is shorter, so that the user is far away. As shown in FIG. 6, an embodiment of a screen shown in phantom guides the user from the current location 711 to the koala bear 37c. (2) The travel path guides the user by voice: the user is guided to the target location 38 by voice (e.g., by voice prompt: 300 meters straight and then turning right). Weng: p.9 (Desc. p.5) paras. 4-8 and Fig. 6. [Weng Fig. 6 reproduced in the original Office Action.] Therefore, the phantom guide can be in the form of voice).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Weng into the combined teaching of Bar-Zeev and Curry so that the viewing location can be located (guided to) conveniently with voice.

Regarding claim 67, the claim is a system claim of method claim 57. The claim is similar in scope to claim 57 and it is rejected under similar rationale as claim 57.

Claims 58-59 and 68-69 are rejected under 35 U.S.C. 103 as being unpatentable over Bar-Zeev in view of Curry as applied to claim 52, and further in view of Pielawa et al. (9,741,126) and Wigdor et al. (2012/0264510; IDS).

Regarding claim 58, the combined teaching of Bar-Zeev and Curry teaches the method of claim 52, wherein causing the AR device to generate for display the AR phantom body outline at the recommended viewing position comprises: identifying a furniture item, wherein a position of the furniture item comprises the recommended viewing position (e.g., it is possible for the augmented reality video to depict one or more audience members as if they were sitting alongside the user, behind the user, across the room from the user, or other relative location with respect to the user. The audience members can be animated, making occasional movements (e.g., shifting from side to side, rocking in their chair, stretching), and sounds (e.g., laughing, clapping, cheering, yawning) which are common in a movie theater. Bar-Zeev: c.23 L.43-50); identifying a mesh representation of the furniture item (see 58_1 below); and overlaying the AR phantom body outline on the furniture item based on the identified mesh representation (see 58_1 and 58_2 below). While the combined teaching of Bar-Zeev and Curry does not explicitly teach, Pielawa teaches:

(58_1) identifying a mesh representation of the furniture item (e.g., a method for segmenting a mesh, the method may include receiving or generating the mesh, wherein the mesh is a three dimensional surface mesh that represents a three dimensional object and comprises vertexes, edges and faces; finding, by a computerized search module, first edges of the mesh that have an edge angle below an edge angle threshold, wherein each first edge is a border of a pair of faces of the mesh and wherein an edge angle of a first edge is an angle between normals to the pair of faces; finding, by the computerized search module, first vertices of the mesh that have a negative angular defect that is below a negative angular defect threshold and have exactly one neighboring first edge, wherein each first vertex is shared by multiple faces of the mesh, and wherein an angular defect of a first vertex is responsive to angles between all pairs of neighboring faces of the edges that share one of the multiple faces; finding, by the computerized search module, second edges of the mesh that link the first vertices of the mesh; clustering faces of the mesh to provide first clusters by joining faces of the mesh that share an edge of the mesh that is not a first edge and is not a second edge; searching, by the computerized search module, for cutting edges out of the boundaries between the first clusters; and segmenting, by a computerized segmentation module, the mesh along the cutting edges to provide mesh segments. Pielawa: Abstract. For example, a mesh that represents a chair may be segmented to mesh segments such as legs, back, arms and base. Pielawa: c.5 L.47-49. Assuming that a mesh that represents a chair is classified as including four rounded legs, a pair of rounded arms and an apertured back, finding a match to such a chair will be more efficient than trying to find a match to the chair without knowing these classifications. Pielawa: c.5 L.54-58).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Pielawa into the combined teaching of Bar-Zeev and Curry so that finding a match to a chair (furniture) will be more efficient with known classifications of legs, arms and back.

While the combined teaching of Bar-Zeev, Curry and Pielawa does not explicitly teach, Wigdor teaches:

(58_2) overlaying the AR phantom body outline on the furniture item based on the identified mesh representation (e.g., a gaming system may consider one or more characteristics of a physical object such as geometric shape, geometric size, weight and/or textile feel. One or more said characteristics may be used to match a physical object to a virtualized representation. Wigdor: [0026] L.1-5. For example, the system may recognize that physical objects 102 and 104 have a geometric shape similar to candidates 302 and 304 respectively and select candidates 302 and 304 as virtualized representations of their respective physical objects. The system may modify the appearance, such as the size and/or the perspective view of candidates 302 and 304, to more closely match the dimensions of physical objects 102 and 104. For example, as shown in FIG. 1, physical object 102 is displayed as a coat rack. The gaming system may recognize candidate 302 as a good match for physical object 102 because candidate 302 is a palm tree, and the shape of the trunk and branches of the palm tree closely resemble the shape of the coat rack. Wigdor: [0026] L.5-19. See 58_1 above for mesh representation).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Wigdor into the combined teaching of Bar-Zeev, Curry and Pielawa so that the user can interact with the physical environment and incorporate real-world elements from the physical environment into the virtual environment (Wigdor: [0019] L.6-8).

Regarding claim 59, the combined teaching of Bar-Zeev, Curry, Pielawa and Wigdor teaches the method of claim 58, wherein the recommended viewing position is a chair (e.g., the audience members can be animated, making occasional movements (e.g., shifting from side to side, rocking in their chair, stretching), and sounds (e.g., laughing, clapping, cheering, yawning) which are common in a movie theater. Bar-Zeev: c.23 L.46-50. For example, a mesh that represents a chair may be segmented to mesh segments such as legs, back, arms and base. Pielawa: c.5 L.47-49), and wherein causing the AR device to generate for display the AR phantom body outline at the recommended viewing position further comprises providing for display the AR phantom body outline seated in the chair (it is obvious that, as the recommended viewing position is at a chair, the phantom body outline (guide) is placed on the chair).

Regarding claims 68-69, the claims are system claims of method claims 58-59 respectively. The claims are similar in scope to claims 58-59 respectively, and they are rejected under similar rationale as claims 58-59 respectively.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SING-WAI WU, whose telephone number is (571) 270-5850. The examiner can normally be reached 9:00am - 5:30pm (Central Time).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kee Tung, can be reached at 571-272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SING-WAI WU/
Primary Examiner, Art Unit 2611
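Curry, as applied in claims 52, 54 and 61, supplies the idea of synthesizing the view at a vantage point between two cameras by interpolating their images. The toy sketch below uses a per-pixel cross-fade as a deliberate simplification of whatever interpolation Curry actually performs; the function name, weights and placeholder frames are assumptions for illustration only:

```python
import numpy as np

# Toy sketch of view interpolation between two camera images, in the spirit of
# how the rejection applies Curry to angular positions between a1 and a2.
# A per-pixel cross-fade is a stand-in for Curry's actual method.

def interpolate_view(img_a: np.ndarray, img_b: np.ndarray, t: float) -> np.ndarray:
    """Synthesize the view at fraction t of the way from camera A to camera B."""
    if not 0.0 <= t <= 1.0:
        raise ValueError("t must lie between the two camera positions")
    return ((1.0 - t) * img_a + t * img_b).astype(img_a.dtype)

# Usage: views at the three intermediate vantage points of Curry Fig. 13B
# (points 1338, 1340, 1342 between cameras 1306 and 1308), with dummy frames.
cam_a = np.zeros((4, 4, 3), dtype=np.float32)
cam_b = np.ones((4, 4, 3), dtype=np.float32)
intermediate = [interpolate_view(cam_a, cam_b, t) for t in (0.25, 0.5, 0.75)]
```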

Prosecution Timeline

Jun 26, 2024: Application Filed
Dec 27, 2025: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597174: METHOD AND APPARATUS FOR DELIVERING 5G AR/MR COGNITIVE EXPERIENCE TO 5G DEVICES
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12591304: SYSTEMS AND METHODS FOR CONTEXTUALIZED INTERACTIONS WITH AN ENVIRONMENT
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12586311: APPARATUS AND METHOD FOR RECONSTRUCTING 3D HUMAN OBJECT BASED ON MONOCULAR IMAGE WITH DEPTH IMAGE-BASED IMPLICIT FUNCTION LEARNING
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12537877: MANAGING CONTENT PLACEMENT IN EXTENDED REALITY ENVIRONMENTS
Granted Jan 27, 2026 (2y 5m to grant)

Patent 12530797: PERSONALIZED SCENE IMAGE PROCESSING METHOD, APPARATUS AND STORAGE MEDIUM
Granted Jan 20, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 8%
With Interview: 18% (+10.6%)
Median Time to Grant: 3y 0m
PTA Risk: Low
Based on 189 resolved cases by this examiner. Grant probability derived from career allow rate.

Free tier: 3 strategy analyses per month