DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment / Arguments
§ 101 Rejection. Applicant’s amendment overcomes the § 101 rejection of the claims.
Applicant’s arguments with respect to the amended claims have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-3, 7-13 and 17-21 are rejected under 35 U.S.C. 103 as being unpatentable over Sadek (U.S. Patent App. Pub. No. 2024/0371185) in view of Walton (U.S. Patent App. Pub. No. 2017/0365100), and further in view of Chaurasia (U.S. Patent App. Pub. No. 2020/0265649).
Regarding claim 1:
Sadek teaches: a method for determining viewability of content items (para. 12, methods for producing a realistic edited video scene) in a virtual environment, the method comprising:
selecting a first plurality of pixels of the content item;
determining a first plurality of color values of the first plurality of pixels of the content item;
selecting a second plurality of pixels of an image of a rendered screen of a virtual environment;
determining a second plurality of color values of the second plurality of pixels of the image of the rendered screen of the virtual environment;
comparing the first plurality of color values with the second plurality of color values; and
determining whether the content item being presented within the virtual environment is being obstructed by a visual element based on the comparison (Sadek teaches that occlusion detection and processing is known, based on pixel-wise comparison of color values; see para. 53. Sadek also teaches that its teachings are beneficial for video modification/editing, such as when new content is embedded into an existing video, to make the result look natural and realistic (para. 4). Sadek gives one example of an advertisement (i.e. “content item”) to be added or rendered into a video scene (i.e. “rendered screen”); see para. 11. This teaches the above first and second pluralities of pixels and color values, respectively, of a content item and rendered screen, and comparing them to determine occlusion (obstruction)). However, Sadek does not specify that its videos are for a virtual environment. Consider the following.
In analogous art, Walton, also relevant to viewability of content items, teaches this in the context of virtual environments (e.g. claim 1, method for generating an augmented reality (AR) image from first and second images, done so that the final result will have objects that are “correctly occluded” (para. 68)). Walton also teaches that one of the images can be that of a “rendered screen of a virtual environment” or image of a real scene, to be used in the AR applications of Walton (see paras. 112-113). Walton also teaches that videos are applicable to its AR/virtual environments (para. 219).
Re: in response to determining that the content item being presented within the virtual environment is not being obstructed by the visual element based on the comparison, associating the content item with an impression and storing a record of the impression indicating that a user has viewed the content item, consider the following. In analogous art, Chaurasia is also relevant to virtual content and obstruction (Chaurasia, paras. 6-7, describing general background characteristics of obstructions and their impact on a user viewing an object of interest). In the event that virtual content (an object of interest) is not obstructed, Chaurasia also teaches associating said content with an impression and storing a record of the impression indicating that the user has viewed the content (e.g. para. 240, “…system may further comprise a database configured to store the recorded field of view, pre-defined time intervals, pre-defined object categories, and data related to one or more objects, pre-determined angles of rotation, and speed of rotation of the user on at least one angle towards the one or more objects”; any one of these data elements relates to the one or more objects. See also Fig. 13 for another teaching of the above).
Modifying Sadek to include video applicability to virtual/augmented reality environments, per Walton (both references being concerned with rendering, image processing, visibility, and videos), and to include the recording features of Chaurasia, is taught and suggested by the prior art, and would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP § 2143(A). Additional motivation would be to store and retrieve/analyze user behavior toward objects of interest to better tailor AR content (Chaurasia, paras. 11-12).
The prior art included each element recited in claim 1, although not necessarily in a single embodiment, with the only difference between the claimed invention and the prior art being the lack of actual combination of certain elements in a single prior art embodiment, as described above.
One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention. Additional motivation is found in the prior art to make the final result more realistic (Sadek, background); and/or to have correct occlusion in virtual scenes (Walton, background).
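For context, the pixel-comparison viewability technique mapped above for claim 1 can be illustrated with a minimal sketch. All function names, sampling positions, the per-channel tolerance, and the 10% mismatch threshold below are hypothetical illustrations; they are not drawn from the claims or from any applied reference.

```python
# Illustrative sketch of the claim 1 steps: sample a plurality of pixels from
# the content item and from the corresponding region of the rendered screen,
# compare their color values, and infer obstruction from the mismatch rate.
# Images are nested lists indexed as image[y][x] holding (R, G, B) tuples.

def sample_pixels(image, positions):
    """Select a plurality of pixels (color-value tuples) at given (x, y) positions."""
    return [image[y][x] for (x, y) in positions]

def is_obstructed(content_item, rendered_screen, positions, screen_offset,
                  tolerance=8, max_mismatch_ratio=0.10):
    """Compare color values pixel-by-pixel; report obstruction when too many
    sampled screen pixels differ from the content item's pixels."""
    ox, oy = screen_offset
    first = sample_pixels(content_item, positions)                      # first plurality
    second = sample_pixels(rendered_screen,
                           [(x + ox, y + oy) for (x, y) in positions])  # second plurality
    mismatches = sum(
        1 for a, b in zip(first, second)
        if any(abs(ca - cb) > tolerance for ca, cb in zip(a, b))
    )
    return mismatches / len(positions) > max_mismatch_ratio
```

Under this sketch, an unobstructed content item yields matching samples (no impression-blocking obstruction), while an occluding visual element causes the sampled screen pixels to diverge from the content item's pixels.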
Regarding claim 2:
Sadek and/or Walton teach: the method of claim 1, wherein the first plurality of color values and the second plurality of color values are red, green, and blue (RGB) color values (Sadek, para. 66, RGB color values) (Walton, e.g. paras. 69, 71, color values from RGB).
It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have further modified the applied reference(s), in view of same, to have obtained the above, motivated to make use of known color spaces for image processing and visibility.
Regarding claim 3:
Sadek teaches: the method of claim 1, wherein the first plurality of color values and the second plurality of color values are cyan, yellow, magenta, and black (CMYK) color values (para. 66, CMYK).
It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have further modified the applied reference(s), in view of same, to have obtained the above, motivated to make use of known color spaces for image processing and visibility.
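As background for the color-value limitations of claims 2 and 3, the relationship between RGB and CMYK color values can be sketched with the standard conversion below. The function name and the 8-bit input range are illustrative assumptions, not taken from the claims or the applied references.

```python
def rgb_to_cmyk(r, g, b):
    """Convert 8-bit RGB color values to CMYK fractions in [0.0, 1.0],
    using the standard black-extraction formula."""
    if (r, g, b) == (0, 0, 0):
        return (0.0, 0.0, 0.0, 1.0)  # pure black: all key, no color ink
    c = 1 - r / 255
    m = 1 - g / 255
    y = 1 - b / 255
    k = min(c, m, y)  # black (key) component
    return tuple(round((v - k) / (1 - k), 4) for v in (c, m, y)) + (k,)
```

For example, pure red (255, 0, 0) converts to full magenta and yellow with no cyan or black, illustrating that the same pixel comparison can be carried out in either color space.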
Regarding claim 7:
Claim 7 recites: the method of claim 1, wherein the method further comprises selecting a third plurality of pixels of a second image of a second rendered screen of the virtual environment (see the mapping of claim 1. Claim 7 is an embodiment whereby another modification is desired to be made, so a second rendered screen of the virtual environment (i.e. at another location, or at a later time) is rendered with pixels).
It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have further modified the applied reference(s), in view of same, to have obtained the above, motivated to apply the same viewability determination to additional rendered screens of the virtual environment.
Regarding claim 8:
Claim 8 recites: the method of claim 7, wherein the method further comprises: determining a third plurality of color values of the third plurality of pixels of the second image of the second rendered screen of the virtual environment;
comparing the first plurality of color values with the third plurality of color values; and
determining that the content item being presented within the virtual environment is being obstructed by the visual element based on the comparison (see the mapping of claim 1. Claim 8 is an embodiment whereby another modification is desired to be made, so a second rendered screen of the virtual environment (i.e. at another location, or at a later time) is rendered with pixels, and the same content image is sought to be added. This is obvious over the prior art as mapped in claim 1; the method is simply repeated for a new scene or at a different time).
It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have further modified the applied reference(s), in view of same, to have obtained the above, motivated to apply the same viewability determination to additional rendered screens of the virtual environment.
Regarding claim 9:
It would have been obvious for one of ordinary skill in the art to have further modified the applied reference(s), in view of same, to have obtained: the method of claim 1, wherein the method further comprises determining that the content item is within a view frustum of the virtual environment before selecting the second plurality of pixels of the image of the rendered screen, and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP § 2143(A).
Walton teaches viewpoint rendering, or rendering to a specific/specified viewpoint, whereby both the image related to the virtual environment (i.e. “rendered screen”) and the virtual object (i.e. content item) are taken from the same viewpoint (paras. 67-68). This corresponds to a determination that the content item is within a “view frustum of the virtual environment” before selecting the second plurality of pixels. Modifying the applied references, in view of Walton, to have included the above, is taught and suggested by the prior art and would have been obvious to one of ordinary skill, with additional motivation to facilitate viewpoint rendering and final image coherency.
One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.
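As context for the view-frustum limitation of claim 9, a containment test of the kind at issue can be sketched as follows. This is a hypothetical illustration using the canonical normalized-device-coordinate frustum; it is not drawn from Walton or the claims.

```python
def in_view_frustum(point_ndc):
    """A point lies inside the canonical view frustum when, after perspective
    projection, all of its normalized device coordinates fall in [-1, 1]."""
    x, y, z = point_ndc
    return all(-1.0 <= c <= 1.0 for c in (x, y, z))
```

Such a check, run on the content item's projected position before sampling screen pixels, would avoid comparing pixels for content that is entirely outside the rendered view.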
Regarding claim 10:
Sadek teaches: the method of claim 1, wherein comparing the first plurality of color values with the second plurality of color values comprises determining that the first plurality of color values and the second plurality of color values are approximately the same (see the mapping of claim 1; there is no restriction on the comparison of color values. One outcome or result can be that they are approximately the same).
(Alternatively, in method claims, it is the overall method steps that are given patentable weight, not the intended result thereof, because the intended result does not materially alter the overall method. Here, this limitation is not given patentable weight because it simply expresses the intended result of a positively recited process step (the comparing step). See MPEP § 2111.04.)
It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have further modified the applied reference(s), in view of same, to have obtained the above, motivated to perform processing to arrive at realistic editing.
Regarding claim 11: see also claim 1
Walton teaches: a system for determining viewability of content items in virtual environments (claim 13, an augmented reality processing system performing the functions of claim 13; the uncertainty of objects corresponds to viewability), the system comprising: a hardware processor (para. 230, the system includes a processor) that is configured to perform the recited functions. The method performed by the processor corresponds to the method of claim 1; the same rationale for rejection applies.
Regarding claim 12: see claim 2.
These claims are similar; the same rationale for rejection applies.
Regarding claim 13: see claim 3.
These claims are similar; the same rationale for rejection applies.
Regarding claim 17: see claim 7.
These claims are similar; the same rationale for rejection applies.
Regarding claim 18: see claim 8.
These claims are similar; the same rationale for rejection applies.
Regarding claim 19: see claim 9.
These claims are similar; the same rationale for rejection applies.
Regarding claim 20: see claim 10.
These claims are similar; the same rationale for rejection applies.
Regarding claim 21: see also claim 1.
Walton teaches: a non-transitory computer-readable medium containing computer executable instructions that, when executed by a processor (paras. 230-231, memory storing instructions for execution by a processor), cause the processor to perform a method for determining viewability of content items in virtual environments (e.g. the method of claim 1, the uncertainty corresponding to viewability of rendered or to-be-rendered items), the method comprising the recited steps. The method of claim 21 corresponds to the method of claim 1; the same rationale for rejection applies.
Claim(s) 4-6 and 14-16 are rejected under 35 U.S.C. 103 as being unpatentable over Sadek in view of Walton and Chaurasia, and further in view of Ciurea (U.S. Patent App. Pub. No. 2014/0321712).
Regarding claim 4:
It would have been obvious for one of ordinary skill in the art to have combined and modified the applied reference(s), in view of same, to have obtained: the method of claim 1, wherein selecting the first plurality of pixels of the content item further comprises selecting a first reference pixel (Ciurea, para. 246, which teaches selecting a reference pixel for purposes of determining visibility of corresponding pixels) of the content item (content item mapped in claim 1) and selecting pixels that are proximate to the reference pixel (Ciurea, e.g. paras. 106, 186, 191, each of which teaches selecting pixels proximate to a reference pixel / pixel regions. Alternatively, Sadek teaches that reference regions are known, e.g. paras. 174, 187, region of a reference frame), and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP § 2143(A).
Modifying the applied references, in view of same, to include a reference pixel and region (proximate pixels), whether for similarity, region mapping, or ease of processing (as opposed to pixel-by-pixel processing), is taught and suggested by the prior art, and would have been obvious and predictable to one of ordinary skill.
The prior art included each element recited in claim 4, although not necessarily in a single embodiment, with the only difference between the claimed invention and the prior art being the lack of actual combination of certain elements in a single prior art embodiment, as described above.
One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.
Regarding claim 5:
It would have been obvious for one of ordinary skill in the art to have further modified the applied reference(s), in view of same, to have obtained: the method of claim 4, wherein selecting the first plurality of pixels of the content item further comprises comparing color values of the reference pixel with color values of the pixels that are proximate to the reference pixel, and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP § 2143(A).
Ciurea teaches determining local statistics of pixel regions of interest (i.e. mean, variance); see paras. 276, 279. Modifying the applied references to apply this to color values of pixels, per Sadek, to be compared to other regions of, for example, another image, as also mapped in claim 1, and where region comparison is taught by Ciurea (paras. 276, 279), is taught, suggested, and motivated by the prior art, to speed processing and to perform the desired image/video analysis for modifications.
One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.
Regarding claim 6:
It would have been obvious for one of ordinary skill in the art to have further modified the applied reference(s), in view of same, to have obtained: the method of claim 5, wherein selecting the first plurality of pixels of the content item further comprises determining whether to include the first reference pixel in the first plurality of pixels based on the comparison between the color values of the reference pixel with the color values of the pixels that are proximate to the reference pixel, and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP § 2143(A).
Ciurea teaches reference pixels, as mapped in claim 4 (e.g. para. 246). In terms of comparing color values of the reference pixel with proximate pixels to determine whether to include the first reference pixel in the first plurality of pixels, Walton teaches that color comparison to make a similarity/inclusion decision is known (see para. 182, which teaches that color similarity can be used to determine pixel categorizations). Modifying the applied references, in view of same, to apply the comparison/similarity teachings to the reference pixel and plurality of pixels, as mapped in claims 1 and 4, is taught, suggested, and motivated by the prior art, to speed processing and to perform the desired image/video analysis for modifications.
One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention. Additional motivation includes taking advantage of region processing for regions of similar features.
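The reference-pixel selection mapped for claims 4-6 can be illustrated with a short sketch: pick a reference pixel, gather proximate pixels, and use color similarity to decide which pixels (including the reference itself, per claim 6) to include. The function name, radius, tolerance, and similarity ratio are all hypothetical choices for illustration only.

```python
def select_first_plurality(image, ref_xy, radius=1, tolerance=12,
                           min_similar_ratio=0.5):
    """Select a reference pixel and pixels proximate to it, keeping neighbors
    whose color values are similar to the reference (claim 5 comparison)."""
    rx, ry = ref_xy
    ref = image[ry][rx]
    neighbors, similar = [], []
    for y in range(max(0, ry - radius), min(len(image), ry + radius + 1)):
        for x in range(max(0, rx - radius), min(len(image[0]), rx + radius + 1)):
            if (x, y) == (rx, ry):
                continue
            neighbors.append((x, y))
            if all(abs(a - b) <= tolerance for a, b in zip(ref, image[y][x])):
                similar.append((x, y))
    selected = list(similar)
    # Claim 6 step: include the reference pixel itself only when the
    # comparison shows it is representative of its neighborhood.
    if neighbors and len(similar) / len(neighbors) >= min_similar_ratio:
        selected.append((rx, ry))
    return selected
```

In this sketch a reference pixel that is a color outlier (e.g. a stray dark pixel in a uniform region) is excluded from the first plurality, while a representative reference pixel is included along with its similar neighbors.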
Regarding claim 14: see claim 4.
These claims are similar; the same rationale for rejection applies.
Regarding claim 15: see claim 5.
These claims are similar; the same rationale for rejection applies.
Regarding claim 16: see claim 6.
These claims are similar; the same rationale for rejection applies.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
* * * * *
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Sarah Lhymn whose telephone number is (571)270-0632. The examiner can normally be reached M-F, 9:00 AM to 6:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xiao Wu can be reached at 571-272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
Sarah Lhymn
Primary Examiner
Art Unit 2613
/Sarah Lhymn/Primary Examiner, Art Unit 2613