Prosecution Insights
Last updated: April 19, 2026
Application No. 18/569,291

OBJECT VIEWABILITY IN VIRTUAL ENVIRONMENTS

Status: Non-Final OA (§101, §102, §103)
Filed: Dec 12, 2023
Examiner: HSU, JONI
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Google LLC
OA Round: 1 (Non-Final)
Grant Probability: 87% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 9m
Grant Probability With Interview: 95%

Examiner Intelligence

Career Allow Rate: 87% (741 granted / 848 resolved), above average, +25.4% vs Tech Center average
Interview Lift: +7.2% (moderate), measured on resolved cases with interview
Avg Prosecution: 2y 9m typical timeline; 34 applications currently pending
Total Applications: 882 across all art units (career history)

Statute-Specific Performance

§101: 8.4% (-31.6% vs TC avg)
§103: 59.7% (+19.7% vs TC avg)
§102: 11.4% (-28.6% vs TC avg)
§112: 3.1% (-36.9% vs TC avg)
Tech Center averages are estimates; based on career data from 848 resolved cases.

Office Action

Rejections: §101, §102, §103
DETAILED ACTION Notice of Pre-AIA or AIA Status The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . Information Disclosure Statement The information disclosure statements (IDS) submitted on May 24, 2024, September 29, 2025, and November 26, 2025 were filed after the mailing date of the application on December 12, 2023. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner. Claim Rejections - 35 USC § 101 35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title. Claims 1 and 9-18 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. MPEP 2106 III provides a flowchart for the subject matter eligibility test for product and processes. The claim analysis following the flowchart is as follows: Regarding Claim 1: Step 1: Is the claim to a process, machine, manufacture or composition of matter? Yes. It recites a method, which is a process. Step 2A, Prong One: Does the claim recite an abstract idea, law of nature, or nature phenomenon? Yes. Claim 1 recites: A method for determining viewability of an object by a user in a virtual environment, the method comprising: determining that a presentation of the object within the virtual environment meets a set of viewability conditions; capturing a two-dimensional projection of the object as presented in the virtual environment; determining that the two-dimensional projection of the object matches a reference version of the object based on a comparison of an average color of features of the reference version of the object and the average color of the features in the two-dimensional projection of the object; and classifying presentation of the object in the virtual environment based on whether the two-dimensional projection of the object matches the reference version of the object, including: in response to determining that the two-dimensional projection of the object matches the reference version of the object, classifying the presentation of the object within the virtual environment as a viewable rendering of the object; and in response to determining that the two-dimensional projection of the object does not match the reference version of the object classifying the presentation of the object within the virtual environment as a non-viewable rendering of the object. The presentation of the object within the virtual environment is extra-solution activity that is pre-solution activity that is incidental to the primary process that is merely a tangential addition to the claim. The steps in the claim can all be done mentally and/or through mathematical relationships and calculations. A person can look at the presentation of the object and mentally determine that it meets a set of viewability conditions; the person can look at the two-dimensional projection of the object and can mentally compare it to a reference version of the object; and the person can mentally classify presentation of the object as either a viewable rendering or a non-viewable rendering based on mentally determining whether or not the two-dimensional projection of the object matches the reference version of the object. 
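For illustration, a minimal sketch of the Claim 1 pipeline as recited above: a 2-D projection whose per-feature average colors are compared against a reference version of the object, with the result classified as a viewable or non-viewable rendering. The helper names, feature regions, match tolerance, and the gating on the viewability conditions are assumptions, not details taken from the application.

```python
# Illustrative sketch of Claim 1 as recited: classify the presentation as a
# "viewable rendering" when the 2-D projection's per-feature average colors
# match those of the reference version of the object. Feature regions, the
# tolerance, and the gating on viewability conditions are assumptions.
import numpy as np

def average_feature_colors(image: np.ndarray, feature_boxes) -> np.ndarray:
    """Mean RGB over each feature region (x0, y0, x1, y1) of an H x W x 3 image."""
    return np.array([image[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0)
                     for (x0, y0, x1, y1) in feature_boxes])

def classify_presentation(projection: np.ndarray, reference: np.ndarray,
                          feature_boxes, viewability_conditions_met: bool,
                          tolerance: float = 16.0) -> str:
    if not viewability_conditions_met:
        return "non-viewable rendering"
    proj = average_feature_colors(projection, feature_boxes)
    ref = average_feature_colors(reference, feature_boxes)
    matches = bool(np.all(np.abs(proj - ref).max(axis=1) <= tolerance))
    return "viewable rendering" if matches else "non-viewable rendering"
```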
Step 2A, Prong Two: Does the claim recite additional elements that integrate the judicial exception into a practical application? No. The presentation of the object within the virtual environment is extra-solution activity that is pre-solution activity that is incidental to the primary process that is merely a tangential addition to the claim. The claim does not recite any computer elements. Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? No. As discussed above, the presentation of the object within the virtual environment is extra-solution activity that is pre-solution activity that is incidental to the primary process that is merely a tangential addition to the claim. The claim does not recite any computer elements. Therefore, Claim 1 is not eligible subject matter under 35 U.S.C. 101. Regarding Claim 2, the Examiner points out that Claim 2 recites “The method of claim 1, wherein the set of viewability conditions comprises: validating, from one or more processors, a rendering confirmation for the object within the virtual environment.” Thus, Claim 2 requires one or more processors to validate a rendering confirmation for the object within the virtual environment, and thus cannot be mentally performed. Thus, it recites additional elements that amount to significantly more than the judicial exception. Therefore, Claim 2 is eligible subject matter. Regarding Claim 9, it depends from Claim 1 with additional limitation “wherein the set of viewability conditions comprises: determining an average luminance of the object meets a threshold luminance.” A person can mentally calculate and determine that an average luminance of the object meets a threshold luminance. Therefore, Claim 9 recites abstract idea without additional elements. No additional elements are recited to integrate the abstract idea into practical application or amount to significantly more than the abstract idea. Therefore, Claim 9 is not eligible subject matter under 35 U.S.C. 101. Regarding Claim 10, it depends from Claim 9 with the additional limitation “wherein determining the average luminance of the object meets the threshold luminance comprises: calculating an average luminance of pixels including the object; converting the average luminance to a representative value; and comparing the representative value to a threshold luminance value.” A person can mentally calculate the average luminance, convert the average luminance; and compares the representative value to a threshold luminance value. Therefore, Claim 10 recites abstract idea without additional elements. No additional elements are recited to integrate the abstract idea into practical application or amount to significantly more than the abstract idea. Therefore, Claim 10 is not eligible subject matter under 35 U.S.C. 101. 
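As a concrete reading of the Claim 10 limitation quoted above (average the pixel luminance, convert it to a representative value, compare against a threshold), a minimal sketch follows; the Rec. 601 luma weights and the 0-255 scale are assumptions, since the claim does not specify them.

```python
# Illustrative sketch of the Claim 9/10 luminance condition. The Rec. 601
# luma weights and the 0-255 threshold scale are assumptions; the claim only
# recites averaging, converting to a representative value, and comparing.
import numpy as np

def meets_threshold_luminance(object_pixels: np.ndarray, threshold: float = 40.0) -> bool:
    """object_pixels: N x 3 array of RGB values in [0, 255] covering the object."""
    luma = object_pixels @ np.array([0.299, 0.587, 0.114])  # per-pixel luminance
    representative = float(luma.mean())                      # average -> representative value
    return representative >= threshold
```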
Regarding Claim 11, it depends from Claim 1 with the additional limitation “wherein classifying presentation of the object in the virtual environment further comprises: in response to classifying the presentation of the object within the virtual environment as the viewable rendering of the object, incrementing a count of viewability of the object; determining that a number of sequential incrementations of the count of viewability of the object meets a threshold viewability count; and registering the presentation of the object.” A person can look at the object and mentally count how many times the object is mentally determined to be a viewable rendering; the person can mentally determine that the count meets a threshold viewability count; and can write down on a piece of paper to register the presentation of the object. Therefore, Claim 11 recites abstract idea without additional elements. No additional elements are recited to integrate the abstract idea into practical application or amount to significantly more than the abstract idea. Therefore, Claim 11 is not eligible subject matter under 35 U.S.C. 101. Regarding Claim 12, it depends from Claim 1 with additional limitation “wherein classifying presentation of the object in the virtual environment further comprises: in response to classifying the presentation of the object within the virtual environment as a non-viewable rendering of the object, incrementing a count of non-viewability of the object; determining that a number of sequential incrementations of the count of non-viewability meets a threshold non-viewability count; and providing an alert regarding the non-viewability of the object.” A person can look at the object and mentally count how many times the object is mentally determined to be a non-viewable rendering; the person can mentally determine that the count meets a threshold non-viewability count; and can write down on a piece of paper to alert regarding the non-viewability of the object. Therefore, Claim 12 recites abstract idea without additional elements. No additional elements are recited to integrate the abstract idea into practical application or amount to significantly more than the abstract idea. Therefore, Claim 12 is not eligible subject matter under 35 U.S.C. 101. Regarding Claim 13, it depends from Claim 1 with additional limitation “wherein determining that the two-dimensional projection of the object matches the reference version of the object comprises: computing a hash of the two-dimensional projection; and comparing the hash of the two-dimensional projection to the hash of the reference version of the object.” A person can mentally compute a hash; the person can mentally compare the hashes. Therefore, Claim 13 recites abstract idea without additional elements. No additional elements are recited to integrate the abstract idea into practical application or amount to significantly more than the abstract idea. Therefore, Claim 13 is not eligible subject matter under 35 U.S.C. 101. 
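The Claim 11/12 counting limitations discussed above reduce to a small amount of state: consecutive viewable classifications accumulate toward registering the presentation, consecutive non-viewable classifications accumulate toward an alert. A minimal sketch, with the thresholds and the register/alert actions as hypothetical placeholders:

```python
# Illustrative sketch of the Claim 11/12 counting logic. Thresholds and the
# "register"/"alert" outcomes are placeholders, not taken from the application.
class ViewabilityCounter:
    def __init__(self, view_threshold: int = 5, nonview_threshold: int = 5):
        self.view_threshold = view_threshold
        self.nonview_threshold = nonview_threshold
        self.viewable_run = 0
        self.nonviewable_run = 0

    def record(self, classification: str) -> str | None:
        if classification == "viewable rendering":
            self.viewable_run += 1
            self.nonviewable_run = 0
            if self.viewable_run >= self.view_threshold:
                self.viewable_run = 0
                return "register presentation"        # Claim 11
        else:
            self.nonviewable_run += 1
            self.viewable_run = 0
            if self.nonviewable_run >= self.nonview_threshold:
                self.nonviewable_run = 0
                return "alert: object not viewable"   # Claim 12
        return None
```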
Regarding Claim 14, it depends from Claim 13 with additional limitation “wherein computing the hash of the two-dimensional projection and of the reference version of the object comprises computing an average hash comprising: computing an average color value of at least a portion of the two-dimensional projection; encoding each pixel of the two-dimensional projection based on whether a color value of the pixel is at least the average color value; creating a bit string based on the encoded pixels; and converting the bit string to a hex value.” A person can mentally compute the average color value; the person can mentally encode each pixel by mentally representing each pixel as binary data; the person can mentally create a bit string; and the person can mentally calculate and convert the bit string to a hex value. Therefore, Claim 14 recites abstract idea without additional elements. No additional elements are recited to integrate the abstract idea into practical application or amount to significantly more than the abstract idea. Therefore, Claim 14 is not eligible subject matter under 35 U.S.C. 101. Regarding Claim 15, it depends from Claim 14 with additional limitation “wherein determining that the two-dimensional projection of the object matches a reference version of the object comprises determining a difference between the hex value and a reference hex value representing the reference version of the object.” A person can mentally calculate a difference between the hex values. Therefore, Claim 15 recites abstract idea without additional elements. No additional elements are recited to integrate the abstract idea into practical application or amount to significantly more than the abstract idea. Therefore, Claim 15 is not eligible subject matter under 35 U.S.C. 101. Regarding Claim 16, it depends from Claim 12 with the additional limitation “wherein determining that the two-dimensional projection of the object matches a reference version of the object comprises: identifying locations of a set of edges in the reference version of the object; searching for the locations of the set of edges in the two-dimensional projection; and comparing an average color of pixels of the locations of the edges in the two-dimensional projection to the average color of pixels of the locations of the edges in the reference version of the object.” A person can look at the object and mentally identify locations of edges; the person can look at the two-dimensional projection and mentally search of the locations of the edges; and the person can mentally calculate and compare the average color of pixels. Therefore, Claim 16 recites abstract idea without additional elements. No additional elements are recited to integrate the abstract idea into practical application or amount to significantly more than the abstract idea. Therefore, Claim 16 is not eligible subject matter under 35 U.S.C. 101. Claim 17 recites similar limitations discussed above with respect to Claim 1 but with additional elements of one or more non-transitory computer storage media that can cause one or more computers to perform operations. The one or more non-transitory computer storage media and the one or more computers are generic computer component that do not integrate the abstract ideas recited in these claims into practical application or amount to significantly more (see MPEP 2106.05(a), (b), and (f)). Claim 18 is similar in scope to Claim 17, and therefore is also rejected under 35 U.S.C. 101 under the same rationale as Claim 17. 
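The Claim 14/15 limitations quoted above describe what is commonly known as an average hash (aHash): compute an average color value, encode each pixel as 1 or 0 against that average, form a bit string, and convert it to hex. A minimal sketch under common aHash conventions; the 8x8 downsampling is an assumption, and Claim 15's "difference" between hex values is interpreted here as a Hamming distance over the underlying bits.

```python
# Illustrative average-hash (aHash) sketch of the Claim 14/15 recitation.
# The 8x8 downsampling and the Hamming-distance comparison are common aHash
# conventions, not details taken from the application.
import numpy as np

def average_hash(gray: np.ndarray, size: int = 8) -> str:
    """gray: H x W array of intensity values; returns a hex digest."""
    h, w = gray.shape
    small = gray[:h - h % size, :w - w % size] \
        .reshape(size, h // size, size, w // size).mean(axis=(1, 3))  # size x size block means
    bits = (small >= small.mean()).astype(int).ravel()                # encode vs. average value
    bit_string = "".join(map(str, bits))
    return f"{int(bit_string, 2):0{size * size // 4}x}"               # bit string -> hex value

def hash_difference(hex_a: str, hex_b: str) -> int:
    """Number of differing bits between two equal-length hex hashes."""
    return bin(int(hex_a, 16) ^ int(hex_b, 16)).count("1")
```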
Therefore, Claims 1 and 9-18 are rejected under 35 U.S.C. 101 because the claimed invention is directed to abstract idea without significantly more. Claim Rejections - 35 USC § 102 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention. Claim(s) 1, 2, 9, 10, 17, and 18 is/are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Badichi (US 20220193548A1). As per Claim 1, Badichi teaches a method for determining viewability of an object by a user in a virtual environment (determine the viewability of the CG objects, drawn within the viewport, in the eyes of the viewer, [0004], in Virtual Reality (VR) the scene is displayed to the viewer via two screens, in VR, viewport 100 may be the two screens, [0093]), the method comprising: determining that a presentation of the object within the virtual environment meets a set of viewability conditions (viewability determination system 200 can be configured to calculate a viewability score of an object to be displayed in a viewport 100, [0110], the viewability determination system 200 may be configured to determine a first value indicative of a relative portion of the object from viewport 100, the viewability determination system 200 can determine a plurality, of points discretely distributed on the object, and for each point of the points if the point is visible to a user viewing the viewport 100, the first value can be determined by dividing the size of the sections represented by a corresponding point of the points determined to be visible, by viewport’s 100 size, [0111]); capturing a two-dimensional projection of the object as presented in the virtual environment (scene may include various Computer Graphic (GC) objects, CG objects are drawn within viewport 100, CG objects may be two-dimensional objects, [0089], viewport 100 may refer to a two-dimensional polygon onto which the three-dimensional scene is projected, [0091], [0093]); determining that the two-dimensional projection of the object matches a reference version of the object based on a comparison of an average color of features of the reference version of the object and the average color of the features in the two-dimensional projection of the object (viewability determination system 200 may be configured to determine a third value indicative of color resemblance between colors of corresponding sections of the object and desired colors for the corresponding sections, [0127], object 910, having desired colors and of scene 920 that is viewed in viewport 100 showing object 910 with the actual colors as viewed by the viewer, the color resemblance determination method includes identifying segments of 
object 910 with dominating desired colors, dominating desired colors is determined by calculating the average Red Green Blue (RGB) values of the pixels within each segment, and determine if all RGB values of the pixels in the segment are within a threshold distance from the average RGB values, [0129], the actual colors of segments 930, 940, and 950 (determined to have dominating desired colors) as displayed in viewport 100 are then determined and compared to the corresponding desired colors of each of the segments determined to have dominating desired colors, [0131]); and classifying presentation of the object in the virtual environment based on whether the two-dimensional projection of the object matches the reference version of the object (the measure of resemblance for the expected and actual colors for each segment is the basis for the calculation of the third value, an example may be that the actual colors of object 910 as displayed in the viewport 100 are different than the expected colors of object 910 when object 910 is rendered in scene 920 in darker colors when viewed in a night time occurring scene, [0132], in the illustrated example, the actual colors of segments 930 and 940 as seen within the viewport 100, substantially resemble the expected colors, whereas the actual colors of segment 950 as seen within the viewport 100 do not resemble the expected colors, due to a flame of the explosion that took place in the scene 920 within the area covered by segment 950, [0133]), including: in response to determining that the two-dimensional projection of the object matches the reference version of the object, classifying the presentation of the object within the virtual environment as viewable rendering of the object (if upon the third value being higher than a third high threshold, the viewability score will be calculated based on a third over-threshold-value indicative of the third value being over the third high threshold, the third over-threshold-value is one, [0136], if a specific object’s color resemblance value is more than 60% with respect to the desired colors of the specific object, the specific object may be considered to be color resembling to the desired colors, the third over-threshold-value that will be used in the viewability score calculation for the specific object will be one, [0137]); and in response to determining that the two-dimensional projection of the object does not match the reference version of the object classifying the presentation of the object within the virtual environment as a non-viewable rendering of the object (third value is calculated for objects within the viewport 100, being indicative the color resemblance of each object with respect to the desired colors of the respective object, if the third value being lower than a third low threshold, the viewability score will be calculated based on a third under-threshold-value indicative of the third value being under the third low threshold, the third under-threshold-value is zero, [0134], if a specific object’s color resemblance value is lower than 20% with respect to the desired colors of the specific object, the specific object may be considered not to be color resembling to the desired colors, the third under-threshold-value that will be used in the viewability score calculation for the specific object will be zero, [0135]). 
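Badichi's treatment of the color-resemblance "third value," as quoted above from paragraphs [0134] through [0137], can be summarized in a few lines. The 20% and 60% cut-offs are the reference's own examples; passing intermediate values through unchanged is an assumption, since the quoted passages do not say how those values feed the viewability score.

```python
# Sketch of Badichi's quoted thresholding of the color-resemblance "third value":
# below the low threshold it contributes zero to the viewability score, above
# the high threshold it contributes one. Intermediate handling is an assumption.
def third_value_contribution(color_resemblance: float,
                             low: float = 0.20, high: float = 0.60) -> float:
    if color_resemblance < low:
        return 0.0   # third under-threshold-value
    if color_resemblance > high:
        return 1.0   # third over-threshold-value
    return color_resemblance
```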
As per Claim 2, Badichi teaches wherein the set of viewability conditions comprises: validating, from one or more processors, a rendering confirmation for the object within the virtual environment (CG objects are computer-generated images, rendered by a computer, and presented to a viewer, CG objects are drawn within viewport 100, [0089], three-dimensional video game viewed on a game console display, viewport 100 is the player’s game console display, [0092], [0093]). As per Claim 9, Badichi teaches wherein the set of viewability conditions comprises: determining an average luminance of the object meets a threshold luminance (object 910, having desired colors and of scene 920 that is viewed in viewport 100 showing object 910 with the actual colors as viewed by the viewer, the color resemblance determination method includes identifying segments of object 910 with dominating desired colors, dominating desired color is determined by calculating the average RGB values of the pixels within each segment, and determine if all RGB values of the pixels in the segment are within a threshold distance from the average RGB values, wherever RGB is referred, any other color encoding scheme may be used, including YUV (luminance, chroma, violet), [0129]). As per Claim 10, Badichi teaches wherein determining the average luminance of the object meets the threshold luminance comprises: calculating an average luminance of pixels including the object; converting the average luminance to a representative value; and comparing the representative value to a threshold luminance value [0129]. As per Claim 17, Claim 17 is similar in scope to Claim 1, except that Claim 17 is directed to one or more non-transitory computer storage media encoded with computer program instructions that when executed by one or more computers cause the one or more computers to perform the method of Claim 1. Badichi teaches one or more non-transitory computer storage media encoded with computer program instructions that when executed by one or more computers cause the one or more computers to perform the method (the operations in accordance with the teachings herein may be performed by a general-purpose computer specifically configured for the desired purpose by a computer program stored in a non-transitory computer readable storage medium, [0080]). Thus, Claim 17 is rejected under the same rationale as Claim 1. As per Claim 18, Claim 18 is similar in scope to Claim 17, and therefore is rejected under the same rationale. Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. 
Considering objective evidence present in the application indicating obviousness or nonobviousness. Claim(s) 3 is/are rejected under 35 U.S.C. 103 as being unpatentable over Badichi (US 20220193548A1) in view of Borovikov (US010987579B1). Badichi is relied upon for the teachings as discussed above relative to Claim 2. However, Badichi does not expressly teach wherein the set of viewability conditions comprises: determining a viewing angle of the object within a field of view of the user in the virtual environment meets a threshold angle criterion with respect to a surface normal of the object in the virtual environment with respect to the field of view of the user. However, Borovikov teaches wherein the set of viewability conditions comprises: determining a viewing angle of the object within a field of view of the user in the virtual environment meets a threshold angle criterion with respect to a surface normal of the object in the virtual environment with respect to the field of view of the user (predict that an object in the 3D environment may come within a FoV based on an object’s position within a threshold angle of a FoV, col. 8, lines 26-29; virtual environment, col. 1, lines 27-28). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Badichi so that the set of viewability conditions comprises: determining a viewing angle of the object within a field of view of the user in the virtual environment meets a threshold angle criterion with respect to a surface normal of the object in the virtual environment with respect to the field of view of the user because Borovikov suggests that this is needed in order to determine whether an object is viewable by the user (col. 8, lines 26-29). Claim(s) 4 is/are rejected under 35 U.S.C. 103 as being unpatentable over Badichi (US 20220193548A1) and Borovikov (US010987579B1) in view of Salter (US010955665B2). Badichi and Borovikov are relied upon for the teachings as discussed above relative to Claim 3. However, Badichi and Borovikov do not expressly teach wherein the set of viewability conditions comprises: determining that object pixels comprising the object comprise coordinates coinciding with the field of view of the user within the virtual environment. However, Salter teaches wherein the set of viewability conditions comprises: determining that object pixels comprising the object comprise coordinates coinciding with the field of view of the user within the virtual environment (determining field of view for a group of users in a coordinate system shared by the group of users, determining a common viewing location for the at least one commonly viewed virtual object within the coordinate system shared by the group of users, col. 20, lines 33-44; each pixel in the 2-D pixel area may represent a depth value of an object, col. 9, lines 32-35; optimal viewing location and perspective for shared-view virtual objects rendered for multiple users sharing a common virtual environment, col. 2, lines 52-56). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Badichi and Borovikov so that the set of viewability conditions comprises: determining that object pixels comprising the object comprise coordinates coinciding with the field of view of the user within the virtual environment because Salter suggests that this is needed in order to determine wither an object is viewable by the user (col. 
20, lines 33-44). Claim(s) 5-6 is/are rejected under 35 U.S.C. 103 as being unpatentable over Badichi (US 20220193548A1), Borovikov (US010987579B1), and Salter (US010955665B2) in view of Lanier (US 20160370855A1). As per Claim 5, Badichi, Borovikov, and Salter are relied upon for the teachings as discussed above relative to Claim 4. However, Badichi, Borovikov, and Salter do not expressly teach wherein the set of viewability conditions comprises: determining that one or more features of the object within the field of view of the user are not occluded by one or more other objects. However, Lanier teaches wherein the set of viewability conditions comprises: determining that one or more features of the object within the field of view of the user are not occluded by one or more other objects (if the virtual object is within the first field of view, the method proceeds to determine if the transparency of the virtual object is above a threshold, the transparency of the virtual object may indicate whether the object is to be used to occlude a portion of the real-world environment, the threshold may correspond to a transparency level below which an object will substantially occlude the real-world environment located at the location of the object as viewed through the display system, transparency levels above the threshold may allow the real-world environment to remain viewable through the virtual object when viewed through the display system, which may not allow the virtual object to occlude the real-world environment, if the transparency of the virtual object is not above the threshold, the display may not be capable of presenting the virtual object with sufficient occlusive abilities, [0042]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Badichi, Borovikov, and Salter so that the set of viewability conditions comprises: determining that one or more features of the object within the field of view of the user are not occluded by one or more other objects because Lanier suggests that this way, if the transparency level of the virtual object is above the threshold, this allows the user to view the virtual object and also any object behind the virtual object [0042]. As per Claim 6, Badichi, Borovikov, and Salter do not teach wherein determining that the one or more features of the object within the field of view of the user are not occluded by one or more other objects comprises determining a transparency threshold is met for the one or more other objects determined to be located between the field of view of the user and the one or more features of the object. 
However, Lanier teaches wherein determining that the one or more features of the object within the field of view of the user are not occluded by one or more other objects comprises determining a transparency threshold is met for the one or more other objects determined to be located between the field of view of the user and the one or more features of the object (if the virtual object is within the first field of view, the method proceeds to determine if the transparency of the virtual object is above a threshold, the transparency of the virtual object may indicate whether the object is to be used to occlude a portion of the real-world environment, the threshold may correspond to a transparency level below which an object will substantially occlude the real-world environment located at the location of the object as viewed through the display system, transparency levels above the threshold may allow the real-world environment to remain viewable through the virtual object when viewed through the display system, which may not allow the virtual object to occlude the real-world environment, if the transparency of the virtual object is not above the threshold, the display may not be capable of presenting the virtual object with sufficient occlusive abilities, [0042]). This would be obvious for the reasons given in the rejection for Claim 5. Claim(s) 7 is/are rejected under 35 U.S.C. 103 as being unpatentable over Badichi (US 20220193548A1), Borovikov (US010987579B1), Salter (US010955665B2), and Lanier (US 20160370855A1) in view of Ito (US 20190132529A1). Badichi, Borovikov, Salter, and Lanier are relied upon for the teachings as discussed above relative to Claim 6. However, Badichi, Borovikov, Salter, and Lanier do not expressly teach wherein the one or more features of the object comprise at least one corner feature of the object and a center feature of the object. However, Ito teaches wherein the one or more features of the object comprise all features of the object (calculates the coordinate range of an image-capturing position at which a target object is not occluded by other objects and the entire object can be captured, [0065]). Since Ito teaches all features of the object, this means that it includes one corner feature of the object and a center feature of the object. Thus, Ito teaches wherein the one or more features of the object comprise at least one corner feature of the object and a center feature of the object [0065]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Badichi, Borovikov, Salter, and Lanier so that the one or more features of the object comprise at least one corner feature of the object and a center feature of the object because Ito suggests that this ensures that the entire object is viewable [0065]. Claim(s) 8 is/are rejected under 35 U.S.C. 103 as being unpatentable over Badichi (US 20220193548A1), Borovikov (US010987579B1), and Salter (US010955665B2) in view of Lamontagne (WO 2020157738). Badichi, Borovikov, and Salter are relied upon for the teachings as discussed above relative to Claim 4. However, Badichi, Borovikov, and Salter do not teach wherein the set of viewability conditions comprises: validating a dimensionality of the object meets a threshold dimensionality comprising: determining a pixel ratio of the object pixels to on-screen pixels meets a threshold value; determining a threshold number of object pixels comprise onscreen pixels. 
However, Lamontagne teaches wherein the set of viewability conditions comprises: validating a dimensionality of the object meets a threshold dimensionality comprising: determining a pixel ratio of the object pixels to on-screen pixels meets a threshold value; determining a threshold number of object pixels comprise onscreen pixels (screen coverage metric means a measure related to screen coverage of a multidimensional object, on a viewport of a graphical interface, in a multidimensional digital environment, the metric can be represented as a ratio, screen coverage means a span ratio of a multidimensional object relative to the viewport displaying the multidimensional digital environment, viewport is an area that is expressed in coordinates specific to a rendering device, viewport of a graphical interface displayed on a screen would be the pixels of the screen, and the viewability of an object would be the ratio an object spans (in pixels) relative to the viewport, [0038], visibility metric means a measure related to the visibility of a multidimensional object on a viewport of a graphical interface, in a multidimensional digital environment, visibility metric means a measure of visibility of the multidimensional object on the viewport, the metric can be determined as a ratio, visibility of an object means a ratio of a multidimensional object that is visible to a user of the viewport of the multidimensional digital environment, [0039], screen coverage metric determination module 110 can be configured to determine a total number of pixels representing the objects of interest as well as total number of pixels in the viewport, module 110 can determine the screen coverage metric by dividing the number of pixels used by the object of interest by the total number of pixels in the viewport, this ratio can represent the metric of screen coverage by the object of interest, [0049], visibility metric determination module 112 that can be configured to determine the ratio of the object of interest on the view port, module 112 can determine the visibility metric by dividing the screen coverage metric by the GAP, [0050], GAP provides an estimate of the hypothetical screen area for the object of interest’s projection on the viewport, visibility metric can represent the ratio between the actual size of the object of interest that is visible on the screen and its hypothetical maximum, the visibility metric reflects not only parts of the object of interest that are obscured by other objects, but also the part of the object that is not visible on the viewport, a ratio by module 112, related to visibility of the object of interest can be determined by dividing the ratio of screen coverage by the object of interest by the calculated GAP of the object of interest, the ratio determined by module 112 can represent the visibility metric, [0051], objects of interest 302 and 304 each render 3,000 pixels of viewport 202 while object 306 renders 1,380 pixels, since the view of object of interest 304 is partially obscured by object 306, object of interest 304 renders 3,000 – 1,380 = 1,620 pixels of viewport 202, thus, if it is presumed viewport 202 renders 10,000 pixels in total, screen coverage ratio, V1, of object of interest can be determined as, [0062], V1 = pixels of object of interest ÷ total number of pixels in viewport, [0063], V1(object 302) = 3,000 ÷ 10,000 = 0.300, [0064], V1(object 304) = 1,620 ÷ 10,000 = 0.162, [0065], the metric of screen coverage can be determined as 0.3 for object of interest 302 and 0.162 
for object 304, set the metric of screen coverage to be zero if the actual calculated screen coverage ratio is below a predetermined threshold, [0066]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Badichi, Borovikov, and Salter so that the set of viewability conditions comprises: validating a dimensionality of the object meets a threshold dimensionality comprising: determining a pixel ratio of the object pixels to on-screen pixels meets a threshold value; determining a threshold number of object pixels comprise onscreen pixels because Lamontagne suggests that this way, it can accurately collect data related to visibility of the multidimensional digital object within the multidimensional digital environment [0003]. Claim(s) 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Badichi (US 20220193548A1) in view of Hays (US 20080102947A1). Badichi is relied upon for the teachings as discussed above relative to Claim 1. However, Badichi does not teach wherein classifying presentation of the object in the virtual environment further comprises: in response to classifying the presentation of the object within the virtual environment as the viewable rendering of the object, incrementing a count of viewability of the object; determining that a number of sequential incrementations of the count of viewability of the object meets a threshold viewability count; and registering the presentation of the object. However, Hays teaches wherein classifying presentation of the object in the virtual environment further comprises: in response to classifying the presentation of the object within the virtual environment as the viewable rendering of the object, incrementing a count of viewability of the object; determining that a number of sequential incrementations of the count of viewability of the object meets a threshold viewability count; and registering the presentation of the object (track the length of time an image appears on screen even if it does not constitute an impression and aggregate the total time the gamer has been exposed to the image, thereby implementing the idea that multiple insufficient views can add up to an impression, if a player drives a car around a racetrack several times and each time sees a billboard for three seconds at the threshold angle and size, after five laps the cumulative impressions viewed by the gamer equals fifteen seconds, in this case, one impression cycle is counted, and the advertisement server is reset to start recording the next impression cycle, a Jumbotron is periodically rotating through several advertisements, as the advertisements are rotated, an impression is counted: for Advertisement A after three rotations, for Advertisement B after five rotations, and for Advertisement C after 1.5 rotations, [0086]). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Badichi so that classifying presentation of the object in the virtual environment further comprises: in response to classifying the presentation of the object within the virtual environment as the viewable rendering of the object, incrementing a count of viewability of the object; determining that a number of sequential incrementations of the count of viewability of the object meets a threshold viewability count; and registering the presentation of the object because Hays suggests that this way, the advertisers will know whether the advertisement was viewed enough times to make an impression [0083, 0086]. Claim(s) 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over Badichi (US 20220193548A1) in view of Dunn (US 20140201673A1) and Fram (US008610746B2). Badichi is relied upon for the teachings as discussed above relative to Claim 1. However, Badichi does not teach wherein classifying presentation of the object in the virtual environment further comprises: in response to classifying the presentation of the object within the virtual environment as a non-viewable rendering of the object, incrementing a count of non-viewability of the object; determining that a number of sequential incrementations of the count of non-viewability meets a threshold non-viewability count. However, Dunn teaches wherein classifying presentation of the object in the virtual environment further comprises: in response to classifying the presentation of the object within the virtual environment as a non-viewable rendering of the object, incrementing a count of non-viewability of the object; determining that a number of sequential incrementations of the count of non-viewability meets a threshold non-viewability count (predetermined thresholds may include a count of the tiles into which the images of the non-visible UI elements are drawn, [0013]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Badichi so that classifying presentation of the object in the virtual environment further comprises: in response to classifying the presentation of the object within the virtual environment as a non-viewable rendering of the object, incrementing a count of non-viewability of the object; determining that a number of sequential incrementations of the count of non-viewability meets a threshold non-viewability count because Dunn suggests that this reduces the delay in showing the content [0039]. However, Badichi and Dunn do not teach providing an alert regarding the non-viewability of the object. However, Fram teaches in response to classifying the presentation of the object as non-viewable rendering of the object; and providing an alert regarding the non-viewability of the object (triggers certain warnings if all voxels have not been displayed or if at least a predetermined portion of the voxels have not been displayed with a certain display method (such as brightness, or other parameters), col. 6, lines 31-37). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Badichi and Dunn to include providing an alert regarding the non-viewability of the object because Fram suggests that this way, the user knows that the image had not been displayed with a certain display method, so that the image can be adjusted to be displayed with the certain display method (col. 6, lines 31-37). Claim(s) 13-15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Badichi (US 20220193548A1) in view of Seidemann (US 20230237479A1). As per Claim 13, Badichi is relied upon for the teachings as discussed above relative to Claim 1. However, Badichi does not teach wherein determining that the two-dimensional projection of the object matches the reference version of the object comprises: computing a hash of the two-dimensional projection; and comparing the hash of the two-dimensional projection to the hash of the reference version of the object. However, Seidemann teaches wherein determining that the two-dimensional projection of the object matches the reference version of the object comprises: computing a hash of the two-dimensional projection; and comparing the hash of the two-dimensional projection to the hash of the reference version of the object (scanning the second digital RGB image and determining the number of respectively colored pixels of the secondary color space on the substrate, by converting the determined number of respectively colored pixels for each primary color to hexadecimal numerals, by comparing the hexadecimal numerals with the hexadecimal numeral of the hash value printed on the substrate, [0087], the number of respectively colored pixels for each primary color of the authentication image may be determined, may be transformed into hexadecimal numerals and may be compared to the one-time verification number (which is the hash value), if both hexadecimal numerals are identical the authentication image is considered not to be manipulated, [0156]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Badichi so that determining that the two-dimensional projection of the object matches the reference version of the object comprises: computing a hash of the two-dimensional projection; and comparing the hash of the two-dimensional projection to the hash of the reference version of the object because Seidemann suggests that this ensures that image matches the reference image [0087, 0165]. As per Claim 14, Badichi teaches determining that the two-dimensional projection of the object matches a reference version of the object based on a comparison of an average color of features of the reference version of the object and the average color of the features in the two-dimensional projection of the object, as discussed in the rejection for Claim 1. However, Badichi does not teach wherein computing the hash of the two-dimensional projection and of the reference version of the object comprises computing an average hash comprising: computing an average color value of at least a portion of the two-dimensional projection; encoding each pixel of the two-dimensional projection based on whether a color value of the pixel is at least the average color value; creating a bit string based on the encoded pixels; and converting the bit string to a hex value. 
However, Seidemann teaches wherein computing the hash of the two-dimensional projection and of the reference version of the object comprises: creating a bit string based on the encoded pixels; converting the bit string to a hex value; wherein determining that the two-dimensional projection of the object matches a reference version of the object comprises determining a difference between the hex value and a reference hex value representing the reference version of the object [0087, 0156]. Since Badichi teaches determining that the two-dimensional projection of the object matches a reference version of the object based on a comparison of an average color of features of the reference version of the object and the average color of the features in the two-dimensional projection of the object, as discussed in the rejection for Claim 1, this teaching of the hash and the hex values from Seidemann can be implemented into the device of Badichi so that computing the hash of the two-dimensional projection and of the reference version of the object comprises computing an average hash comprising: computing an average color value of at least a portion of the two-dimensional projection; encoding each pixel of the two-dimensional projection based on whether a color value of the pixel is at least the average color value; creating a bit string based on the encoded pixels; and converting the bit string to a hex value. This would be obvious for the reasons given in the rejection for Claim 13. 40. As per Claim 15, Badichi does not teach wherein determining that the two-dimensional projection of the object matches a reference version of the object comprises determining a difference between the hex value and a reference hex value representing the reference version of the object. However, Seidemann teaches wherein determining that the two-dimensional projection of the object matches a reference version of the object comprises determining a difference between the hex value and a reference hex value representing the reference version of the object [0087, 0156]. This would be obvious for the reasons given in the rejection for Claim 13. 41. Claim(s) 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Badichi (US 20220193548A1), Dunn (US 20140201673A1), and Fram (US008610746B2) in view of Heikkinen (US 20070024527A1). Badichi, Dunn, and Fram are relied upon for the teachings as discussed above relative to Claim 12. However, Badichi, Dunn, and Fram do not teach wherein determining that the two-dimensional projection of the object matches a reference version of the object comprises: identifying locations of a set of edges in the reference version of the object; searching for the locations of the set of edges in the two-dimensional projection; and comparing an average color of pixels of the locations of the edges in the two-dimensional projection to the average color of pixels of the locations of the edges in the reference version of the object. 
However, Heikkinen teaches wherein determining that the two-dimensional projection of the object matches a reference version of the object comprises: identifying locations of a set of edges in the reference version of the object; searching for the locations of the set of edges in the two-dimensional projection; and comparing an average color of pixels of the locations of the edges in the two-dimensional projection to the average color of pixels of the locations of the edges in the reference version of the object (matching the camera image with the reference image, [0090], the analysis algorithms could be based on comparing features like edges, average color of the whole image, certain regions, and so on, [0091], pick the matching criteria to be used, the criteria can include: pattern matching of certain features like edges, colors may be used as matching criteria, and not the whole image, [0099]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Badichi, Dunn, and Fram so that determining that the two-dimensional projection of the object matches a reference version of the object comprises: identifying locations of a set of edges in the reference version of the object; searching for the locations of the set of edges in the two-dimensional projection; and comparing an average color of pixels of the locations of the edges in the two-dimensional projection to the average color of pixels of the locations of the edges in the reference version of the object because Heikkinen suggests that this is an efficient way to determine that the image matches the reference image [0090, 0091, 0099]. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONI HSU whose telephone number is (571)272-7785. The examiner can normally be reached M-F 10am-6:30pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee Tung can be reached at (571)272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. JH /JONI HSU/Primary Examiner, Art Unit 2611

Prosecution Timeline

Dec 12, 2023
Application Filed
Jan 16, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications with similar technology granted by this examiner

Patent 12592028
METHODS AND DEVICES FOR IMMERSING A USER IN AN IMMERSIVE SCENE AND FOR PROCESSING 3D OBJECTS
2y 5m to grant • Granted Mar 31, 2026
Patent 12586306
METHOD, ELECTRONIC DEVICE, AND COMPUTER PROGRAM PRODUCT FOR MODELING OBJECT
2y 5m to grant • Granted Mar 24, 2026
Patent 12586260
CREATING IMAGE ENHANCEMENT TRAINING DATA PAIRS
2y 5m to grant • Granted Mar 24, 2026
Patent 12581168
A METHOD FOR A MEDIA FILE GENERATING AND A METHOD FOR A MEDIA FILE PROCESSING
2y 5m to grant • Granted Mar 17, 2026
Patent 12561850
IMAGE GENERATION WITH LEGIBLE SCENE TEXT
2y 5m to grant • Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 87%
With Interview (+7.2%): 95%
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 848 resolved cases by this examiner. Grant probability derived from career allow rate.
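For reference, a minimal sketch showing how these headline figures can be reproduced from the career statistics above, assuming the interview lift is simply added in percentage points (the dashboard's exact model is not disclosed):

```python
# Illustrative only: reproduces the dashboard's headline numbers from the
# career statistics shown above, assuming a simple additive interview lift.
granted, resolved = 741, 848      # examiner career data shown above
interview_lift = 7.2              # percentage points, from "Interview Lift"

allow_rate = 100 * granted / resolved                       # ~87.4%, displayed as 87%
with_interview = min(allow_rate + interview_lift, 100.0)    # ~94.6%, displayed as 95%

print(f"Career allow rate: {allow_rate:.1f}%")
print(f"With interview:    {with_interview:.1f}%")
```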
