Prosecution Insights
Last updated: April 19, 2026
Application No. 18/636,871

Measuring Perceptibility Of Content In A Virtual Universe

Status: Non-Final OA (§103, §112)
Filed: Apr 16, 2024
Examiner: THERKORN, ERICA GERALDINE
Art Unit: 2618
Tech Center: 2600 — Communications
Assignee: Oracle International Corporation
OA Round: 1 (Non-Final)
Grant Probability: Favorable
Expected OA Rounds: 1-2
Time to Grant: 2y 9m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved); -62.0% vs TC average. Grants 0% of cases.
Interview Lift: +0.0% (minimal); based on resolved cases with interview.
Avg Prosecution: 2y 9m (typical timeline).
Career History: 7 total applications across all art units; 7 currently pending.

Statute-Specific Performance

§101: 23.8% (-16.2% vs TC avg)
§103: 52.4% (+12.4% vs TC avg)
§112: 23.8% (-16.2% vs TC avg)

Tech Center averages are estimates; figures based on career data from 0 resolved cases.

Office Action

§103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

The following claims are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Regarding claim 6, it recites "…selecting the target object for display of content by comparing the perceptibility score of the location to perceptibility scores of one or more other target objects in the virtual reality environment." … The examiner interprets the claim as written to mean that the perceptibility score of a location (not an object) is being compared to the perceptibility scores of one or more target objects. The specification conflicts with the claim language because the specification discloses that the perceptibility score of a target object is compared to perceptibility scores of other target objects (specification para [21]). This inconsistency between the claimed subject matter and the specification disclosure renders the scope of the claim uncertain. Reference can be made to MPEP 2173.03. For the purpose of compact prosecution and art rejection, the examiner will treat this claim as disclosed in the specification: "…selecting the target object for display of content by comparing the perceptibility score of the target object to perceptibility scores of one or more other target objects in the virtual reality environment."

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 4, 7, 9, 10, 12, 15, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Badichi (US 11260299 B2) in view of Asbun (US 20190191203 A1) in further view of Donnelly (US 20040222988 A1).

Regarding claim 17, Badichi teaches a system comprising: at least one device including a hardware processor ("System 200 further comprises a processing resource 240. Processing resource 240 can be one or more processing units (e.g. central processing units), microprocessors, microcontrollers (e.g. microcontroller units (MCUs)) or any other computing devices or modules, including multiple and/or parallel and/or distributed processing units…," (col 9, lines 41-49). The processing resource reads on the hardware processor.);

generating a plurality of visibility scores, the plurality of visibility scores comprising: a first visibility score computed based on the first representation of the target object in the viewport ("For this purpose, the viewability determination system 200 can be configured to calculate a viewability score of an object to be displayed in a viewport 100. The viewability score is calculated based on a first value and optionally on one or more of a second value, a third value and/or a fourth value (block 310). The viewability determination system 200 may be configured to determine a first value indicative of a relative portion of the object from viewport 100 (block 320). For this purpose, the viewability determination system 200 can determine (a) a plurality of points discretely distributed on the object, each point representing a section of the object, and (b) for each point of the points—if the point is visible to a user viewing the viewport 100. The first value can be determined by dividing the size of the sections represented by a corresponding point of the points determined to be visible, by viewport's 100 size," (col 10, lines 6-22). The "first value" reads on a first visibility score since the value is indicative of how visible the object is to a user. The object for which a viewability score is being calculated reads on the target object. Evaluating the data by normalizing relative to the viewport's size reads on the first representation of the target object. The first value, second value, third value, and fourth value read on a plurality of visibility scores (col 11, lines 51-61; col 13, lines 13-24; col 14, lines 21-32).);

a second visibility score computed based on the second representation of the target object in the viewport ("Having determined the first value, the viewability determination system 200 may be optionally configured to determine a second value indicative of relative portion of the object visible in the viewport 100. This can be made by: (a) determining a plurality of points discretely distributed on the object, each point representing a section of the object and (b) for each point of the points, if the point is visible to a user viewing the viewport 100. The second value can be calculated as the product of dividing the size of the sections represented by a corresponding point of the points determined to be visible, by the object size (block 330)," (col 11, lines 51-61). The "second value" reads on a second visibility score since the value is indicative of how visible the object is to a user. The object that the reference refers to reads on the target object. Evaluating the data by normalizing relative to the object's size reads on the second representation of the target object.); and

computing a perceptibility score for the target object, within the virtual reality environment, in relation to the location based on the plurality of visibility scores ("After block 310 (i.e. block 320 and optionally one or more of blocks 330, 340 and/or 350), the viewability determination system 200 can be further configured to calculate a viewability score of the object (block 360). The viewability score can be calculated based on the first value and optionally on one or more of the second value, third value and/or fourth value, if such values are calculated," (col 14, lines 64-67; col 15, lines 1-3). The viewability score is a composite calculated based on multiple values and reads on the perceptibility score. The first value, second value, third value, and fourth value read on the plurality of visibility scores (col 10, lines 6-22; col 11, lines 51-61; col 13, lines 13-24; col 14, lines 21-32).).
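The Badichi scheme cited above reduces to simple ratios: sample points distributed over the object, a visibility test per point, and normalization of the visible area by the viewport size (first value) or the object size (second value), with the values folded into a composite score. The Python sketch below is a reading aid only; the names, the dataclass, and the weighting scheme are invented here, not taken from Badichi or the application.

```python
# Illustrative sketch of Badichi-style visibility values (cols 10-15).
# Each sample point represents a section of the object; visibility is
# assumed to have been tested elsewhere (e.g. by ray casting).
from dataclasses import dataclass

@dataclass
class SamplePoint:
    section_area: float  # area of the object section this point represents
    visible: bool        # whether the point is visible in the viewport

def first_value(points, viewport_area):
    """Relative portion of the viewport occupied by visible object sections."""
    visible_area = sum(p.section_area for p in points if p.visible)
    return visible_area / viewport_area

def second_value(points, object_area):
    """Relative portion of the object that is visible in the viewport."""
    visible_area = sum(p.section_area for p in points if p.visible)
    return visible_area / object_area

def viewability_score(v1, v2=None, v3=None, v4=None,
                      weights=(0.4, 0.2, 0.2, 0.2)):
    """Composite score from the first value plus any optional values.
    Badichi does not prescribe a combination rule; a weighted average
    over whichever values are present is assumed purely for illustration."""
    total = weight_sum = 0.0
    for v, w in zip((v1, v2, v3, v4), weights):
        if v is not None:
            total += w * v
            weight_sum += w
    return total / weight_sum
```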
Badichi does not explicitly disclose computing a first representation of the target object in a first surface of the cube map, or computing a second representation of the target object in a second surface of the cube map. Additionally, Badichi does not distinctly disclose that the first representation of the target object is in the first surface of the cube map, or that the second representation of the target object is in the second surface of the cube map.

Asbun teaches computing a first representation of the target object in a first surface of the cube map and computing a second representation of the target object in a second surface of the cube map ("FIG. 11 depicts an example viewport defined over several regions of a cubemap projection. If the '@videoType', '@projection' and '@layout' combination result in a secondary content viewport laid out over several regions of the 2D projected video, multiple AdVRDs may be defined to indicate the portions of the viewport in different regions, as shown in FIG. 11. The client may consolidate the statistics for the parts of the viewport prior to reporting orientation statistics to the server," (page 13, para [0096]; Fig 11). In Fig. 11 it is clear that the parachute (mapped to the target object) is on two cube surfaces. The first representation of the parachute is mapped to SC Viewport #1 (part 1), and the cube surface that it is on is mapped to a first surface of the cube map. The second representation of the parachute is mapped to SC Viewport #1 (part 2), and the cube surface that it is on is mapped to a second surface of the cube map.). Asbun also teaches that the first representation of the target object is in the first surface of the cube map and that the second representation of the target object is in the second surface of the cube map (Asbun teaches that a viewport can span more than one surface of a cube map. After combination, the viewport taught by Badichi is on two different surfaces of a cube map, and the method taught by Badichi is applied to a first and second surface of a cube map.).

Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Asbun to Badichi. The motivation would have been to increase efficiency for rendering 360-degree content.

Badichi in view of Asbun does not distinctly disclose, but Donnelly teaches, computing a cube map for a location within a virtual reality environment, the virtual reality environment including a target object ("FIG. 3B shows an example "render" routine used by the FIG. 3A video game software to provide panoramic compositing allowing rich, complex 3D virtual scenes and worlds… To perform the pre-render exemplary process, a virtual cube 400 is defined within a virtual three-dimensional universe. As shown in illustrative FIG. 4A, virtual cube 400 may be defined within any realistic or fantastic scene such as, for example, the interior of a medieval cathedral. The cube 400 is used for cube mapping. A panoramic view is created using a cube map style rendering of the scene from a chosen location as shown in FIG. 4A to provide more camera freedom for pre-rendered games. This technique in one exemplary illustration keeps the viewpoint static but allows the player to look around in any direction. In more detail, an exemplary 3D scene 402 is created using any conventional 3D modeling application. The scene is rendered out in six different images as if looking through the six different faces of cube 400 with the viewpoint at the center of the cube (FIG. 3B, block 320). This produces a high-quality off-line rendered RGB or other color cube map 404 representation of the scene as shown in FIG. 4B. In exemplary illustrative embodiment, a depth map 406 of the same scene is also created based on the same six cube image faces… Once these data structures are in appropriate memory of video game system 50, the video game software renders one or more real-time objects such as animated characters into the frame buffer using the same view point and frustum parameters in one exemplary embodiment to provide a composite image (FIG. 3B, block 328). (see FIG. 4C and block 322 of FIG. 3B)," (page 3, para [0039]-[0043]; Fig 3B; Fig 4A; Fig 4B; Fig 4C). The disclosed 3D scene reads on the virtual reality environment. The one or more real-time objects read on the target object.).

Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Donnelly to Badichi in view of Asbun. The motivation would have been "to pre-render a three-dimensional scene or universe," (Donnelly; page 1, para [0009]). Additional motivation would have been to increase efficiency for rendering 360-degree content.
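Donnelly's cube mapping is the standard technique of rendering the scene six times from a single viewpoint, once through each cube face with a 90-degree frustum so the faces tile the full view sphere. A hypothetical sketch follows; `render_view` stands in for whatever renderer is available and is an assumption here, not Donnelly's API.

```python
# Forward and up vectors for the six cube faces (+X, -X, +Y, -Y, +Z, -Z).
CUBE_FACES = {
    "+x": ((1, 0, 0), (0, 1, 0)),
    "-x": ((-1, 0, 0), (0, 1, 0)),
    "+y": ((0, 1, 0), (0, 0, -1)),
    "-y": ((0, -1, 0), (0, 0, 1)),
    "+z": ((0, 0, 1), (0, 1, 0)),
    "-z": ((0, 0, -1), (0, 1, 0)),
}

def compute_cube_map(scene, location, render_view, size=512):
    """Return a {face_name: image} dict of six views around `location`.
    A 90-degree field of view makes the six faces cover all directions."""
    return {
        face: render_view(scene, eye=location, forward=forward, up=up,
                          fov_degrees=90, resolution=size)
        for face, (forward, up) in CUBE_FACES.items()
    }
```

A per-face depth map, as in Donnelly's block 322, could be produced the same way by asking the renderer for depth instead of color.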
Regarding claims 1 and 9, they are rejected using the same citations and rationales described in the rejection of claim 17. Claim 1 additionally recites one or more non-transitory computer readable media. Badichi teaches one or more non-transitory computer readable media (Badichi; "The operations in accordance with the teachings herein may be performed by a computer specially constructed for the desired purposes or by a general-purpose computer specially configured for the desired purpose by a computer program stored in a non-transitory computer readable storage medium. The term "non-transitory" is used herein to exclude transitory, propagating signals, but to otherwise include any volatile or non-volatile computer memory technology suitable to the application," (col 6, lines 55-63; col 7, lines 32-49).).

Regarding claim 2, Badichi in view of Asbun in further view of Donnelly teaches the one or more non-transitory computer readable media of claim 1, wherein computing the first representation of the target object in the first surface of the cube map is based on information for the target object in a frame buffer corresponding to the first surface (Donnelly; "Once these data structures are in appropriate memory of video game system 50, the video game software renders one or more real-time objects such as animated characters into the frame buffer using the same view point and frustum parameters in one exemplary embodiment to provide a composite image (FIG. 3B, block 328). Such rendering may make use of the depth information 406 (e.g., through use of a conventional hardware or software-based Z-compare operation and/or collision detection) to provide hidden surface removal and other effects. This same process is repeated in the exemplary embodiment for each of the other cube-mapped faces to produce a post-composited cube map (FIG. 3B, block 330)," (pages 3-4, para [0039]-[0044]; Fig 3B). The one or more real-time objects read on the target object. The composite image is mapped to the first representation. Donnelly discloses rendering the objects into the frame buffer; rendering requires detailed information about the objects being rendered (such as geometry/mesh data, etc.). Donnelly discloses computing a composite image for each cube face. The same view point and frustum parameters read on a first surface of the cube map, and this interpretation is confirmed as Donnelly discloses the process is repeated for each of the other cube-mapped faces.). Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Donnelly to Badichi in view of Asbun. The motivation would have been to improve the efficiency of rendering objects in a 3D scene.

Regarding claim 4, Badichi in view of Asbun in further view of Donnelly teaches the one or more non-transitory computer readable media of claim 1, wherein: the cube map comprises six images having different viewing angles of the virtual reality environment, the six images surrounding a viewpoint of a virtual entity at the location (Donnelly; "In more detail, an exemplary 3D scene 402 is created using any conventional 3D modeling application. The scene is rendered out in six different images as if looking through the six different faces of cube 400 with the viewpoint at the center of the cube (FIG. 3B, block 320). This produces a high-quality off-line rendered RGB or other color cube map 404 representation of the scene as shown in FIG. 4B. In exemplary illustrative embodiment, a depth map 406 of the same scene is also created based on the same six cube image faces (see FIG. 4C and block 322 of FIG. 3B)," (page 3, para [0039]-[0043]; Fig 3B; Fig 4A; Fig 4B; Fig 4C). The disclosed 3D scene reads on the virtual reality environment.). Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Donnelly to Badichi in view of Asbun. The motivation would have been "to provide more camera freedom for pre-rendered games," (Donnelly; page 3, para [0040]).

Regarding claim 7, Badichi in view of Asbun in further view of Donnelly teaches the one or more non-transitory computer readable media of claim 1, wherein: the visibility scores are based on one or more of: obstructions (Badichi; "Having determined the first value, the viewability determination system 200 may be optionally configured to determine a second value indicative of relative portion of the object visible in the viewport 100. This can be made by: (a) determining a plurality of points discretely distributed on the object, each point representing a section of the object and (b) for each point of the points, if the point is visible to a user viewing the viewport 100. The second value can be calculated as the product of dividing the size of the sections represented by a corresponding point of the points determined to be visible, by the object size (block 330)," (col 11, lines 51-61). The second value reads on a visibility score. The score is a measure of how much of the object is currently visible in the viewport. Obstructions would affect how much of the object is currently visible and consequently the result of the second value. Thus the second value is based on obstructions.) and perceptible color difference (Badichi; "The viewability determination system 200 may be optionally configured to determine a third value indicative of color resemblance between colors of one or more corresponding sections of the object and desired colors for the corresponding sections (block 340)… The measure of resemblance for the expected and actual colors for each segment is the basis for the calculation of the third value… Having shown some examples of techniques for determining the color resemblance value of an object displayed within the viewport 100, attention is drawn back to Block 340 of FIG. 3. As indicated above, a third value is calculated for one or more objects within the viewport 100, being indicative the color resemblance of each object with respect to the desired colors of the respective object," (col 12, lines 19-67; col 13, lines 1-24; Fig 9; col 14, lines 64-67; col 15, lines 1-3). The third value reads on a visibility score. From Fig 9 it is clear that the difference between the desired colors (in object 910) and the actual colors (in the scene that is viewed in the viewport) is perceptible to the human eye. In order to measure the color resemblance, the color difference must also be determined. Since the third value is a measure of the resemblance between the expected and actual colors for an object, it is based on the perceptible color difference between the expected and actual colors.).
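The "third value" analysis turns on measuring how far an object's rendered colors drift from its expected colors. Below is a rough sketch of one way such a resemblance value could be computed; plain Euclidean RGB distance is used for simplicity, since Badichi does not prescribe a specific metric, and a perceptual metric such as CIEDE2000 could be substituted.

```python
import math

def color_difference(expected_rgb, actual_rgb):
    """Normalized distance between two 8-bit RGB colors, in [0, 1]."""
    return math.dist(expected_rgb, actual_rgb) / math.dist((0, 0, 0),
                                                           (255, 255, 255))

def third_value(sections):
    """Color-resemblance value from (expected_rgb, actual_rgb) pairs,
    one pair per object section: 1.0 is a perfect match, 0.0 maximal drift."""
    if not sections:
        return 1.0
    diffs = [color_difference(e, a) for e, a in sections]
    return 1.0 - sum(diffs) / len(diffs)
```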
Regarding claims 10, 12, and 15, they are rejected using the same citations and rationales described in the rejections of claims 2, 4, and 7, respectively.

Claims 5, 13, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Badichi in view of Asbun in further view of Donnelly in further view of Kuivalainen (US 20210382679 A1).

Regarding claim 19, Badichi in view of Asbun in further view of Donnelly fails to explicitly teach, but Kuivalainen teaches, the system of claim 17, wherein: the operations further comprise computing an audibility score comprising a value representing a level of sound emitted from the target object at the location in relation to a combined level of other sounds of the virtual reality environment at the location; and computing the perceptibility score for the target object using the audibility score (Kuivalainen; "In step 302, a loudness difference between a loudness of the audio content and a loudness of the background noise is determined. For example, the audio content may be outputted (e.g., generated) by a speaker, and the background noise may be detected in an environment of the speaker. In accordance with this example, the background noise may be sounds other than the audio content that are detected in the environment…. The determination logic 732 may determine the loudness difference by subtracting loudness of the time-averaged noise signal 722 from the loudness that is indicated by the loudness indicator 726," (page 6, para [0041]-[0044]). The loudness difference between the audio content and the background noise reads on the claimed audibility score. After Badichi, Asbun, and Donnelly are combined with Kuivalainen, Kuivalainen's speaker (audio content) is replaced by Badichi's advertisement (potential audio content), which is mapped to the "target object"; an advertisement often speaks or is otherwise audible. The disclosed "loudness of the background noise" becomes that of Badichi's virtual environment, which is the combined level of other sounds of the virtual reality environment. Badichi discloses: "In some cases, the object is content to be displayed on the viewport. In some cases, the content is an advertisement," (Badichi; col 4, lines 27-29). Badichi's viewability score (which reads on the perceptibility score) is a composite calculated based on multiple values; after combination, Badichi's viewability score uses the difference between the audio content and the background noise (which reads on the audibility score).). Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Kuivalainen to Badichi in view of Asbun in further view of Donnelly. The motivation would have been to "increase (e.g., optimize) sound quality of the audio content," (Kuivalainen; page 2, para [0018]).

Regarding claims 5 and 13, they are rejected using the same citations and rationales described in the rejection of claim 19.
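Mapped onto the claim language, Kuivalainen's loudness difference is the target object's sound level minus the combined level of everything else at the location. One detail worth making explicit: decibel levels combine by summing acoustic power, not by adding dB figures. A minimal sketch under that assumption, with illustrative names:

```python
import math

def combined_level_db(levels_db):
    """Combine several sound levels (dB) by summing their acoustic power."""
    return 10 * math.log10(sum(10 ** (level / 10) for level in levels_db))

def audibility_score(target_level_db, other_levels_db):
    """Positive when the target object is louder than all other sounds
    of the environment combined; negative when it is masked by them."""
    return target_level_db - combined_level_db(other_levels_db)
```

For example, a 60 dB advertisement against two 55 dB background sources (roughly 58 dB combined) yields a score of about +2 dB.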
Claims 6, 14, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Badichi in view of Asbun in further view of Donnelly in further view of Leppanen (EP 3073352 A2).

Regarding claim 20, Badichi in view of Asbun in further view of Donnelly fails to explicitly teach, but Leppanen teaches, the system of claim 17, wherein the operations further comprise: selecting the target object for display of content by comparing the perceptibility score of the location to perceptibility scores of one or more other target objects in the augmented reality environment (For the purpose of compact prosecution and art rejection, the examiner will treat this claim as disclosed in the specification: "…selecting the target object for display of content by comparing the perceptibility score of the target object to perceptibility scores of one or more other target objects in the virtual reality environment." Leppanen teaches ranking and selecting target objects based on a visibility score, stating "In general, objects, for example real-life objects, may be ranked based on their predicted visibility, and at least one placement object may be selected based at least in part on the ranking. The selected placement object may then be used to display an augmented reality information object, such as for example an information object as described above. The ranking may be based on assigning a visibility score to at least two objects, for example," (page 9, para [0038]), and "…the augmented reality information element comprises at least one of: a traffic bulletin, a personal message, an advertisement, a general public announcement and a status message relating to a device…," (pages 2-3, para [0009]). The placement object reads on the target object. After Badichi, Asbun, and Donnelly are combined with Leppanen, Leppanen's technique of ranking and selecting based on a visibility/perceptibility score is used to rank and select Badichi's target object in Badichi's virtual reality environment.). Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Leppanen to Badichi in view of Asbun in further view of Donnelly. The motivation would have been to improve the effectiveness of messages displayed in virtual reality.

Regarding claims 6 and 14, they are rejected using the same citations and rationales described in the rejection of claim 20.
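Under the examiner's compact-prosecution construction, claim 20's selection step is an argmax over candidate objects' perceptibility scores, which is also how Leppanen's ranking reads. A short sketch, assuming `perceptibility_score` is any callable such as the composite sketched earlier:

```python
def rank_target_objects(candidates, perceptibility_score):
    """Order candidates from most to least perceptible (Leppanen-style ranking)."""
    return sorted(candidates, key=perceptibility_score, reverse=True)

def select_target_object(candidates, perceptibility_score):
    """Pick the candidate target object with the highest perceptibility score."""
    return max(candidates, key=perceptibility_score)
```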
Allowable Subject Matter

Claims 3, 8, 11, 16, and 18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ERICA G THERKORN, whose telephone number is (571) 272-2939. The examiner can normally be reached Monday - Friday, 9:00am - 5:00pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Devona Faulk, can be reached at 571-272-7515. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ERICA G THERKORN/
Examiner, Art Unit 2618

/DEVONA E FAULK/
Supervisory Patent Examiner, Art Unit 2618

Prosecution Timeline

Apr 16, 2024
Application Filed
Jan 22, 2026
Non-Final Rejection — §103, §112 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 2y 9m
PTA Risk: Low

Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
