Prosecution Insights
Last updated: April 19, 2026
Application No. 18/566,350

METHOD FOR GENERATING A 3D-HEATMAP FOR DISPLAYING A USER'S INTEREST IN A VIRTUAL THREE-DIMENSIONAL OBJECT

Non-Final OA: §103, §112
Filed
Dec 01, 2023
Examiner
HE, YINGCHUN
Art Unit
2613
Tech Center
2600 — Communications
Assignee
Fectar B.V.
OA Round
1 (Non-Final)
Grant Probability: 82% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 5m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 82%, above average (529 granted / 644 resolved; +20.1% vs TC avg)
Interview Lift: +14.4% (moderate), for resolved cases with interview
Typical Timeline: 2y 5m average prosecution; 27 applications currently pending
Career History: 671 total applications across all art units

Statute-Specific Performance

§101: 8.4% (-31.6% vs TC avg)
§103: 54.0% (+14.0% vs TC avg)
§102: 5.4% (-34.6% vs TC avg)
§112: 17.9% (-22.1% vs TC avg)
Tech Center averages are estimates • Based on career data from 644 resolved cases

Office Action

§103 §112
DETAILED ACTION

Note in the following document:
1. Texts in italic bold format are limitations quoted either directly or conceptually from claims/descriptions disclosed in the instant application.
2. Texts in regular italic format are quoted directly from the cited reference or Applicant's arguments.
3. Texts with underlining are added by the Examiner for emphasis.
4. Texts with
5. Acronym "PHOSITA" stands for "Person Having Ordinary Skill In The Art".

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

Use of the word "means" (or "step for") in a claim with functional language creates a rebuttable presumption that the claim element is to be treated in accordance with 35 U.S.C. 112(f) (pre-AIA 35 U.S.C. 112, sixth paragraph). The presumption that 35 U.S.C. 112(f) (pre-AIA 35 U.S.C. 112, sixth paragraph) is invoked is rebutted when the function is recited with sufficient structure, material, or acts within the claim itself to entirely perform the recited function.

Absence of the word "means" (or "step for") in a claim creates a rebuttable presumption that the claim element is not to be treated in accordance with 35 U.S.C. 112(f) (pre-AIA 35 U.S.C. 112, sixth paragraph). The presumption that 35 U.S.C. 112(f) (pre-AIA 35 U.S.C. 112, sixth paragraph) is not invoked is rebutted when the claim element recites function but fails to recite sufficiently definite structure, material, or acts to perform that function.

Claim elements in this application that use the word "means" (or "step for") are presumed to invoke 35 U.S.C. 112(f) except as otherwise indicated in an Office action. Similarly, claim elements that do not use the word "means" (or "step for") are presumed not to invoke 35 U.S.C. 112(f) except as otherwise indicated in an Office action.

The limitations of Claims 1, 3, 7, 10-11 and 13 that recite "display means" are not being treated in accordance with 35 U.S.C. 112(f) because the claimed function is modified by specific structure ("display") that performs the function.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and
(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word "means" (or "step"), more particularly the "control means" recited in Claims 1 and 12-13, are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

Claim Objections

Claims 1, 4 and 8 are objected to because of the following informalities: Claims 1, 4 and 8 use the symbols "-" and "o"; the Examiner suggests deleting those unnecessary symbols. Claim 1 further recites "said virtual 3D-environment"; the Examiner suggests replacing "said" with "the". Furthermore, Claim 4 comprises two sentences (see lines 11-12: "storing for each calculated point of intersection the accumulated amount of time multiplied by a number that depends on the half cone angle."); the Examiner suggests replacing the "." with ";". Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 3 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Regarding Claim 3, the phrase "for example" renders the claim indefinite because it is unclear whether the limitation(s) following the phrase are part of the claimed invention. See MPEP § 2173.05(d).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-3 and 11-13 are rejected under 35 U.S.C. 103 as being unpatentable over Mizuno (US 2021/0217223 A1) in view of Takahashi et al. (A system for three-dimensional gaze fixation analysis using eye tracking glasses, Journal of Computational Design and Engineering 5 (2018) 449-457).

Regarding Claim 1, Mizuno teaches or suggests Method ([0014]: FIG. 2 is a view for describing an example of a method of detecting a visual observation region by a viewer) for generating a 3D-heatmap ([0001]: The present invention relates to a heat map presentation device and a heat map presentation program, and particularly, a device that analyzes a viewing situation of a three-dimensional image by a viewer and presents the viewing situation as a heat map) for displaying a user's interest in a virtual 3D-object ([0009]: As described above, when a user's gazing point on the VR content is analyzed and is displayed with a heat map, it is possible to understand a region that a lot of users are viewing in the VR content), which method comprises the steps of:
- generating a virtual 3D-environment with a virtual 3D-object positioned in said virtual 3D-environment ([0024]: Each of the reproduction devices 301 reproduces a three-dimensional image as the VR content);
- providing a virtual camera movable within said virtual 3D-environment ([0064]: the user can change the user's virtual existing position in the three-dimensional space to various directions other than a front side when viewed from the heat map);
- providing display means arranged in the real world for displaying the images captured by the virtual camera ([0024]: Each of the reproduction devices 301 reproduces a three-dimensional image as the VR content. The three-dimensional image is a parallax image for creating a virtual three-dimensional space of VR, and is a moving image of which content varies with the passage of time. For example, the three-dimensional image is called a 360-degree moving image in which 360-degree all directions are allowed to be viewed in accordance with movement of a viewer's sight line. The three-dimensional image reproduced by the reproduction device 301 is displayed by the HMD 302 worn by the viewer);
- providing control means arranged in the real world for controlling, within the virtual 3D-environment, the movement of the virtual camera by a user ([0024]: Each of the reproduction devices 301 reproduces a three-dimensional image as the VR content. The three-dimensional image is a parallax image for creating a virtual three-dimensional space of VR, and is a moving image of which content varies with the passage of time. For example, the three-dimensional image is called a 360-degree moving image in which 360-degree all directions are allowed to be viewed in accordance with movement of a viewer's sight line. The three-dimensional image reproduced by the reproduction device 301 is displayed by the HMD 302 worn by the viewer);
wherein
- during a time span in which movement of the virtual camera is controlled by the user ([0055]: as in a three-dimensional image displayed on the HMD 302 of the viewer terminal 300, a direction of the heat map displayed on the HMD 201 may dynamically vary in correspondence with user's head movement detected by the sensor of the HMD 201), repeating the steps of:
  o calculating the measured point of intersection of the optical axis of the virtual camera ([0026]: The reproduction device 301 has a function of detecting a region viewed by the viewer in the three-dimensional image displayed on the virtual three-dimensional space. Also see [0028]: Note that, a sight line detection sensor may be mounted on the HMD 302 to detect an actual sight line of the viewer, and a region in a predetermined range in a direction of the sight line may be detected as the visual observation region by the viewer);
  o storing for each calculated point of intersection the accumulated amount of time during which the calculated point of intersection is intersected by the optical axis of the virtual camera ([0042]: For each of a plurality of regions obtained by dividing the three-dimensional image on the basis of the visual observation region information acquired by the visual observation region information acquisition unit 11, the creation factor detection unit 13 detects the number of viewing times by the viewer and viewing time by the viewer as a creation factor for a heat map. ... In this manner, the creation factor detection unit 13 detects three factors including the number of times the region is viewed by the viewer, time for which the region is viewed by the viewer, and a movement situation of the viewer's body when the region is viewed as the heat map creation factors);
- after the time span has expired, generating in the virtual 3D-environment a 3D-heatmap using the accumulated amount of time for each calculated point of intersection and the calculated point of intersection ([0050]: The heat map creation unit 14 creates a heat map displayed on a three-dimensional space on the basis of a plurality of the creation factors detected by the creation factor detection unit 13. The heat map displayed on the three-dimensional space is map information that is displayed by determining a display mode corresponding to the visual observation situation and the body movement situation by the viewer for each of a plurality of unit times obtained by time-dividing the three-dimensional image displayed in the three-dimensional space, and for each of a plurality of regions obtained by spatially dividing the three-dimensional image).

Mizuno does not explicitly recite the virtual 3D-object having a virtual 3D-surface and calculating the measured point of intersection of the optical axis of the virtual camera with the virtual 3D-surface of the virtual 3D-object. However, Takahashi, in the same field of endeavor, discloses a system for three-dimensional gaze fixation analysis using eye tracking glasses (see Title). Takahashi discloses generating a heat map on the surface of an object based on tracking the user's eye gaze information (p.454, right column, second paragraph: The gaze plots are displayed on the surface of the three-dimensional mesh model of the product such that the plots can be viewed from arbitrary directions. See Fig.11). [Image: media_image1.png]

Therefore it would have been obvious to a PHOSITA before the effective filing date to incorporate the teaching of Takahashi into that of Mizuno and to include the limitation of the virtual 3D-object having a virtual 3D-surface and calculating the measured point of intersection of the optical axis of the virtual camera with the virtual 3D-surface of the virtual 3D-object in order to allow the user to monitor where a subject is looking in real time, as suggested by Takahashi (p.449, Section 1 Introduction, lines 1-2).

Regarding Claim 2, Mizuno modified by Takahashi further teaches or suggests wherein the 3D-heatmap is generated on the virtual 3D-surface (Takahashi Fig.2) of the virtual 3D-object ([0010]: The invention has been made to solve the problem, and an object thereof is to provide, with a heat map, multi-dimensional information on a user's viewing situation with respect to a three-dimensional image displayed as a VR content in a three-dimensional space). [Image: media_image2.png] The same reason to combine as that of Claim 1 is applied.

Regarding Claim 3, Mizuno modified by Takahashi further teaches or suggests wherein the method is followed by displaying the generated 3D-heatmap on a display means, for example the display means arranged in the real world for displaying the images captured by the virtual camera (Mizuno [0024]: The three-dimensional image reproduced by the reproduction device 301 is displayed by the HMD 302 worn by the viewer. Also see Takahashi Fig.2). The same reason to combine as that of Claim 1 is applied.

Regarding Claim 11, Mizuno teaches or suggests wherein the display means augment a view of the real world and wherein the orientation and position of the display means is linked to the orientation and position of the virtual camera ([0009]: As described above, when a user's gazing point on the VR content is analyzed and is displayed with a heat map).

Regarding Claim 12, Mizuno teaches or suggests wherein the control means comprise at least one accelerometer and optionally a GPS sensor, to obtain a position and orientation of the control means ([0025]: A gyro sensor or an acceleration sensor is mounted on the HMD 302 on which the three-dimensional image is displayed, and movement of the viewer's head can be detected. In addition, the reproduction device 301 controls reproduction of the three-dimensional image so that the three-dimensional space realized on display of the HMD 302 dynamically varies in correspondence with the movement of the viewer's head which is detected by the sensor of the HMD 302. [0055]: As in a three-dimensional image displayed on the HMD 302 of the viewer terminal 300, a direction of the heat map displayed on the HMD 201 may dynamically vary in correspondence with user's head movement detected by the sensor of the HMD 201).

Regarding Claim 13, Mizuno further teaches or suggests wherein the control means are fixedly arranged to the display means (Fig.1: controller 303 within the viewer terminal 300). [Image: media_image3.png]
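For readers mapping the §103 analysis back to the claim language, the core loop the examiner characterizes in Claim 1 amounts to repeatedly intersecting the virtual camera's optical axis with the surface of the virtual 3D-object and accumulating dwell time per intersection point. The Python sketch below is a minimal illustration of that idea only; it is not taken from the application or the cited references, the object is simplified to a sphere, and all names (ray_sphere_hit, accumulate_gaze, the cell size) are hypothetical.

import math
from collections import defaultdict

# Illustrative sketch of the Claim 1 loop: intersect the camera's optical
# axis with the virtual 3D-surface and accumulate dwell time per hit point.
# The object is simplified to a sphere; a real system would ray-cast against
# the mesh of the virtual 3D-object.

def ray_sphere_hit(origin, direction, center, radius):
    """Return the nearest intersection point of a ray with a sphere, or None."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / (2 * a)
    if t < 0:
        return None
    return tuple(origin[i] + t * direction[i] for i in range(3))

def accumulate_gaze(samples, center=(0, 0, 0), radius=1.0, cell=0.05):
    """samples: (camera_position, optical_axis_direction, dt) tuples.

    Dwell time is binned per quantized intersection point, yielding the
    per-point accumulated time from which a 3D heatmap could be rendered.
    """
    heat = defaultdict(float)
    for position, axis, dt in samples:
        hit = ray_sphere_hit(position, axis, center, radius)
        if hit is not None:
            key = tuple(round(coord / cell) for coord in hit)  # quantize the surface point
            heat[key] += dt
    return heat

if __name__ == "__main__":
    # Camera 3 units in front of the object, looking straight at it for 2 seconds.
    samples = [((0, 0, 3), (0, 0, -1), 0.1)] * 20
    print(accumulate_gaze(samples))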
Claims 8-10 are rejected under 35 U.S.C. 103 as being unpatentable over Mizuno (US 2021/0217223 A1) in view of Takahashi et al. (A system for three-dimensional gaze fixation analysis using eye tracking glasses, Journal of Computational Design and Engineering 5 (2018) 449-457) as applied to Claim 1 above, and further in view of Santos et al. (Analyzing AR Viewing Experience through Analytics Heat Maps for Augmented Content, 2017 19th Symposium on Virtual and Augmented Reality).

Regarding Claim 8, Mizuno modified by Takahashi fails to disclose generating a virtual floor and generating a 2D-heatmap. However, Santos teaches generating a 2D grid on the pattern (p.139, right column, 2nd paragraph) and providing a heat map of the regions within the pattern plane: "... by accumulating the time spent in each square of the 3D grid, it is possible to render the resultant 2D heat map as shown in the bottom part of figure 4" (p.139, 3rd paragraph). [Image: media_image4.png] Therefore it would have been obvious to a PHOSITA before the effective filing date to incorporate the teaching of Santos into that of Mizuno modified by Takahashi and to include the limitation of generating in the virtual environment a virtual floor; during the time span in which movement of the virtual camera is controlled by the user, in addition repeating the steps of: calculating the point of intersection of the normal to the virtual floor with the virtual camera; and storing for each calculated point of intersection the accumulated amount of time during which the calculated point of intersection is intersected by the normal to the virtual floor; and after the time span has expired, generating a 2D-heatmap using the accumulated amount of time during which the calculated point of intersection is intersected by the normal to the virtual floor, so that "by visualizing the heat map, it is possible to analyze which part of the logo is mostly focused, and understand if viewers are directing their attention as desired, on a publicity campaign for example", as suggested by Santos (p.140, left column, lines 1-4).

Regarding Claim 9, Santos further discloses wherein the 2D-heatmap is generated on the virtual floor (Fig.4). The same reason to combine as that of Claim 8 is applied.

Regarding Claim 10, Santos further teaches or suggests wherein the method is followed by displaying the generated 2D-heatmap on a display means, for example the display means arranged in the real world for displaying the images captured by the virtual camera (Fig.4 and p.139, right column, last two lines, to p.140, left column, line 1: The two-dimensional heat map is particularly useful for cases that provide a planar augmentation such as the logo example shown in figure 4).
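The Claim 8 limitation addressed with Santos can be pictured the same way in two dimensions: drop the floor normal through the virtual camera, record where it meets the virtual floor, and accumulate time per floor cell. Below is a minimal sketch of that idea, assuming a horizontal floor at y = 0 with normal (0, 1, 0); the function name and the cell size are hypothetical and are not taken from the application or from Santos.

from collections import defaultdict

# Illustrative sketch of the Claim 8 idea: the foot point of the floor normal
# through the virtual camera is binned into a 2D grid, and the time spent over
# each cell is accumulated to produce a 2D heatmap on the virtual floor.

def floor_heatmap(samples, cell=0.25):
    """samples: (camera_position, dt) pairs; returns accumulated time per floor cell."""
    heat = defaultdict(float)
    for (x, _y, z), dt in samples:
        key = (round(x / cell), round(z / cell))  # intersection of the floor normal with the floor
        heat[key] += dt
    return heat

if __name__ == "__main__":
    path = [((0.1 * i, 1.7, 2.0), 0.1) for i in range(30)]  # user moving along x
    print(floor_heatmap(path))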
Allowable Subject Matter

Claims 4-10 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

The following is a statement of reasons for the indication of allowable subject matter: The prior art, either individually or in combination, fails to disclose or render obvious the limitation of defining a cone shaped envelope of lines having the optical axis of the virtual camera as center line, the cone shaped envelope having a half cone angle which is defined as the angle between the cone's axis of rotational symmetry and its sides; during a time span in which movement of the virtual camera is controlled by the user repeating the steps of: calculating the nearest point of intersection of each of the lines of the cone shape with the virtual object; and storing for each calculated point of intersection the accumulated amount of time multiplied by a number that depends on the half cone angle; after the time span has expired generating in the virtual 3D-environment a 3D-heatmap using the accumulated amount of time for each calculated point of intersection and the calculated point of intersection, as claimed in dependent Claim 4. The closest prior art, Siver et al. (US 2019/0019303 A1), discloses using a cone-like shape ([0056]) representing the location and orientation of a viewer's device (Fig.19). However, the prior art fails to disclose the above-cited limitation in detail as a whole. Claims 5-7 are objected to due to their dependency on Claim 4.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to YINGCHUN HE, whose telephone number is (571) 270-7218. The examiner can normally be reached M-F 8:00-5:00 MT. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Xiao M Wu, can be reached at 571-272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/YINGCHUN HE/
Primary Examiner, Art Unit 2613
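The limitation indicated as allowable in Claim 4 replaces the single optical axis with a cone-shaped envelope of lines around it and stores each accumulated time multiplied by a number that depends on the half cone angle. The sketch below illustrates one possible reading of that limitation; the ring of sample rays, the cosine weighting, and every identifier are assumptions made for illustration, not the applicant's implementation.

import math
from collections import defaultdict

# Illustrative sketch of the Claim 4 variant: a cone-shaped envelope of lines
# with the optical axis as center line. Each line's nearest intersection with
# the object is recorded, and the dwell time is stored multiplied by a weight
# that depends on the half cone angle (here, simply cos(half_angle)).

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def orthonormal_basis(axis):
    """Two unit vectors perpendicular to `axis` (assumed normalized)."""
    helper = (1.0, 0.0, 0.0) if abs(axis[0]) < 0.9 else (0.0, 1.0, 0.0)
    u = normalize((
        axis[1] * helper[2] - axis[2] * helper[1],
        axis[2] * helper[0] - axis[0] * helper[2],
        axis[0] * helper[1] - axis[1] * helper[0],
    ))
    v = (
        axis[1] * u[2] - axis[2] * u[1],
        axis[2] * u[0] - axis[0] * u[2],
        axis[0] * u[1] - axis[1] * u[0],
    )
    return u, v

def cone_rays(axis, half_angle, n_azimuth=8):
    """Center line plus a ring of lines at `half_angle` around it."""
    axis = normalize(axis)
    u, v = orthonormal_basis(axis)
    rays = [axis]
    for k in range(n_azimuth):
        phi = 2 * math.pi * k / n_azimuth
        rays.append(normalize(tuple(
            math.cos(half_angle) * axis[i]
            + math.sin(half_angle) * (math.cos(phi) * u[i] + math.sin(phi) * v[i])
            for i in range(3)
        )))
    return rays

def accumulate_cone(samples, intersect, half_angle, cell=0.05):
    """samples: (camera_position, optical_axis, dt); intersect: nearest-hit ray cast."""
    weight = math.cos(half_angle)           # illustrative choice of the angle-dependent factor
    heat = defaultdict(float)
    for position, axis, dt in samples:
        for ray in cone_rays(axis, half_angle):
            hit = intersect(position, ray)  # nearest point of intersection, or None
            if hit is not None:
                key = tuple(round(c / cell) for c in hit)
                heat[key] += dt * weight
    return heat

Under these assumptions, intersect can be any nearest-hit ray cast against the virtual object, for example the ray_sphere_hit helper from the earlier sketch with the sphere parameters bound in advance.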

Prosecution Timeline

Dec 01, 2023
Application Filed
Oct 09, 2025
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602886: LOW LATENCY HAND-TRACKING IN AUGMENTED REALITY SYSTEMS (2y 5m to grant; granted Apr 14, 2026)
Patent 12588711: METHOD AND APPARATUS FOR OUTPUTTING IMAGE FOR VIRTUAL REALITY OR AUGMENTED REALITY (2y 5m to grant; granted Mar 31, 2026)
Patent 12586247: IMAGE DISTORTION CALIBRATION DEVICE, DISPLAY DEVICE AND DISTORTION CALIBRATION METHOD (2y 5m to grant; granted Mar 24, 2026)
Patent 12586491: Display Device and Method for Driving the Same (2y 5m to grant; granted Mar 24, 2026)
Patent 12579949: IMAGE PROCESSING APPARATUS (2y 5m to grant; granted Mar 17, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 82%
With Interview: 96% (+14.4%)
Median Time to Grant: 2y 5m
PTA Risk: Low
Based on 644 resolved cases by this examiner. Grant probability derived from career allow rate.
