DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of papers submitted under 35 U.S.C. 119(a)-(d), which papers have been placed of record in the file.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on May 23, 2024 complies with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
35 USC § 101 Statutory Analysis
The claims do not recite any of the judicial exceptions enumerated in the 2019 Revised Patent Subject Matter Eligibility Guidance. Further, the claims do not recite any method of organizing human activity, such as a fundamental economic concept or managing interactions between people. Finally, the claims do not recite a mathematical relationship, formula, or calculation. Thus, the claims are eligible because they do not recite a judicial exception.
CLAIM INTERPRETATION
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f):
A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f). The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f). The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f), except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f), except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f), because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: “a value movement portion” configured to “identify a center coordinate of an object included in an image” in claim 8; “a converter” configured to “convert the center coordinate into object location information in a real space” in claim 8; “a visualizer” configured to “visualize 6DoF of the object by labeling the 6DoF based on the object location information” in claim 8; “the converter” is further configured to “determine a size of a real space corresponding to a pixel of the image based on an actual size of a space, where the object is located in a real space, and resolution of the image, and determine the object location information in the real space based on the center coordinate of the object and the size of the real space corresponding to the pixel of the image” in claim 9; “the converter” is further configured to “determine an x value of the object location information in the real space using an X value of the center coordinate, an actual horizontal length of a space visible on the image, and a resolution horizontal value of the image” in claim 10; “the converter” is further configured to “determine a y value of the object location information in the real space using a Y value of the center coordinate, an actual vertical length of a space visible on the image, and a resolution vertical value of the image” in claim 11; “the converter” is further configured to “determine a z value of the object location information in the real space by inputting an x value of the object location information in the real space and a y value of the object location information in the real space to a matrix representing a relationship between the x value of the object location information in the real space, the y value of the object location information in the real space, and the z value of the object location information in the real space” in claim 12; “the visualizer” is further configured to “determine a camera parameter representing a relationship between a camera and the object based on information about a location and rotation of the camera, the object location information in the real space, and information about the rotation; and label a location of the object existing in three-dimensions in the image of two-dimensions by applying a camera parameter to the object location information in the real space and perform visualization” in claim 13; and “the visualizer” is further configured to “display an interface for inputting annotations according to the object location information in the real space and the information about the rotation” in claim 14.
Because these claim limitations are being interpreted under 35 U.S.C. 112(f), they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f), applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f).
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. §102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-14 are rejected under 35 U.S.C. §102(a)(1) as being anticipated by McCormac et al. (U.S. Patent Application Publication No. US 2019/0147220 A1) (hereafter referred to as “McCormac”).
The examiner notes that the various “generic placeholders” identified in section 7 hereinabove are being interpreted under 35 U.S.C. 112(f) as corresponding to the structure shown in FIG. 1 of the instant application and described below.
FIG. 1 is a schematic diagram showing the hardware configuration of the labeling device for estimating 6DoF 100. The above-mentioned configuration of the labeling device for estimating 6DoF 100 is a functional configuration achieved by cooperation of the hardware configuration shown in FIG. 1 and a program. As shown in FIG. 1 and described in paragraphs [0088] and [0089], the labeling device for estimating 6DoF 100 includes, as a hardware configuration, a processor 120, a memory, a storage, and an input/output interface (IF), which are connected to each other by a bus. The processor 120 controls the other components in accordance with a program stored in the memory, performs data processing in accordance with the program, and stores the processing results in the memory. The processor 120 can be a microprocessor. The memory stores the program executed by the processor 120 and data. The memory can be a ROM (Read Only Memory).
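Purely as an illustrative sketch of the cooperation of processor, memory, and program described above (not code from the record; all names are the examiner's hypotheticals), the relied-upon structure can be summarized as:

    # Hypothetical sketch of the hardware/program cooperation described above.
    # None of these names appear in the instant application or in McCormac.
    from dataclasses import dataclass, field

    @dataclass
    class LabelingDevice:
        memory: dict = field(default_factory=dict)   # stores program and data

        def run(self, program, data):
            """Processor executes the stored program and keeps the result in memory."""
            self.memory["result"] = program(data)    # data processing per the program
            return self.memory["result"]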
With regard to claim 1, McCormac describes identifying a center coordinate of an object included in an image (see Figure 3B, element 320, sub-element p, and refer, for example, to paragraphs [0068] and [0069], which describe a surface element or “surfel” representing the area of a 2D object in the frame image; each surfel is identified by a center coordinate p); converting the center coordinate into object location information in a real space (refer, for example, to paragraphs [0068] and [0069]; the surfels are described by object location information in a three-dimensional coordinate system); and visualizing 6DoF of the object by labeling the 6DoF based on the object location information (see Figure 3A and refer, for example, to paragraphs [0070] and [0071], which describe the object labeling, and to paragraphs [0067], [0091], [0092] and [0106], which describe the visualization of the 6DoF of the object labeled based on the object location information).
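For illustration only, the three recited steps of claim 1 can be sketched as follows (a minimal sketch; the function names, data shapes, and the bounding-box assumption are the examiner's hypotheticals and are not drawn from the instant application or from McCormac):

    # Hypothetical sketch of the claimed pipeline; all names are illustrative.
    def identify_center(bounding_box):
        """Step 1: center pixel coordinate (X, Y) of a detected object."""
        x_min, y_min, x_max, y_max = bounding_box
        return (x_min + x_max) / 2.0, (y_min + y_max) / 2.0

    def convert_to_real_space(center, scene_size_m, resolution):
        """Step 2: scale the center coordinate into real-space units."""
        (X, Y), (W, H), (res_x, res_y) = center, scene_size_m, resolution
        return X * W / res_x, Y * H / res_y  # z may be derived separately (cf. claim 5)

    def visualize_6dof(location_xyz, rotation_rpy):
        """Step 3: label the object's 6DoF (x, y, z, roll, pitch, yaw)."""
        return {"6dof": (*location_xyz, *rotation_rpy)}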
As to claim 2, McCormac describes wherein the converting of the center coordinate comprises determining a size of a real space corresponding to a pixel of the image based on an actual size of a space, where the object is located in a real space, and resolution of the image, and determining the object location information in the real space based on the center coordinate of the object and the size of the real space corresponding to the pixel of the image (refer, for example, to paragraphs [0066] and [0093], which discuss the size of the surfels, and to paragraphs [0066] and [0091], which discuss the resolution of the surfels in the image and the determination of the object location information in the real space based on the center coordinate of the object and the size of the real space corresponding to the pixel of the image).
With regard to claim 3, McCormac describes wherein the determining of the object location information comprises determining an x value of the object location information in the real space using an X value of the center coordinate, an actual horizontal length of a space visible on the image, and a resolution horizontal value of the image (refer for example to paragraph [0093]).
As to claim 4, McCormac describes wherein the determining of the object location information comprises determining a y value of the object location information in the real space using a Y value of the center coordinate, an actual vertical length of a space visible on the image, and a resolution vertical value of the image (refer for example to paragraph [0093]).
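For illustration only, the scaling recited in claims 2 through 4 reduces to a per-pixel real-space size multiplied by the center coordinate (a hypothetical sketch; the names are the examiner's and are not drawn from the record):

    # Hypothetical sketch of the claimed per-pixel conversion (claims 2-4).
    def real_x(X, actual_horizontal_length, resolution_horizontal):
        # per-pixel horizontal size of the real space, times the X coordinate
        return X * (actual_horizontal_length / resolution_horizontal)

    def real_y(Y, actual_vertical_length, resolution_vertical):
        # per-pixel vertical size of the real space, times the Y coordinate
        return Y * (actual_vertical_length / resolution_vertical)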
In regard to claim 5, McCormac describes wherein the determining of the object location information comprises determining a z value of the object location information in the real space by inputting an x value of the object location information in the real space and a y value of the object location information in the real space to a matrix representing a relationship between the x value of the object location information in the real space, the y value of the object location information in the real space, and the z value of the object location information in the real space (refer for example to paragraphs [0069] and [0093]).
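For illustration only, one way to realize the matrix relationship recited in claim 5 is a pre-fitted plane model relating the three real-space coordinates (a hypothetical sketch; the coefficients and names are the examiner's assumptions, not the record's):

    # Hypothetical sketch: z recovered from (x, y) via a matrix encoding the
    # claimed relationship, here a plane z = a*x + b*y + c. M is illustrative.
    import numpy as np

    M = np.array([0.02, -0.01, 1.5])  # hypothetical coefficients (a, b, c)

    def z_from_xy(x, y):
        return float(M @ np.array([x, y, 1.0]))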
In regard to claim 6, McCormac describes further comprising determining a camera parameter representing a relationship between a camera and the object based on information about a location and rotation of the camera, the object location information in the real space, and information about the rotation, and the visualizing of the 6DoF comprises labeling a location of the object existing in three-dimensions in the image of two-dimensions by applying a camera parameter to the object location information in the real space and performing visualization (refer, for example, to paragraphs [0050], [0053], [0055], [0056], [0064], [0091], [0130] and [0131]).
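For illustration only, the camera parameter and 2D labeling recited in claim 6 can be sketched with a standard pinhole camera model (a hypothetical sketch; the intrinsic matrix K and all names are assumptions, not drawn from the record):

    # Hypothetical sketch of claim 6 using a standard pinhole camera model.
    import numpy as np

    def camera_parameter(R, camera_location):
        """World-to-camera extrinsic [R | t], with t = -R @ C."""
        t = -np.asarray(R) @ np.asarray(camera_location)
        return np.hstack([np.asarray(R), t.reshape(3, 1)])  # 3x4 matrix

    def label_in_image(K, extrinsic, object_xyz):
        """Project the 3D object location into the 2D image for labeling."""
        p = np.append(object_xyz, 1.0)       # homogeneous world point
        u, v, w = K @ (extrinsic @ p)        # pinhole projection
        return u / w, v / w                  # pixel at which to draw the label

In this sketch, the extrinsic matrix plays the role of the claimed "camera parameter" built from the camera's location and rotation, and the projection places the three-dimensional object location in the two-dimensional image for labeling.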
With regard to claim 7, McCormac describes wherein the visualizing of the 6DoF comprises displaying an interface for inputting annotations according to the object location information in the real space and the information about the rotation (refer, for example, to paragraphs [0050], [0053], [0055], [0056], [0064], [0091], [0130] and [0131]).
As to claim 8, McCormac describes a value movement portion configured to identify a center coordinate of an object included in an image (see Figure 3B, element 320, sub-element p, and refer, for example, to paragraphs [0068] and [0069], which describe a surface element or “surfel” representing the area of a 2D object in the frame image; each surfel is identified by a center coordinate p); a converter configured to convert the center coordinate into object location information in a real space (refer, for example, to paragraphs [0068] and [0069]; the surfels are described by object location information in a three-dimensional coordinate system); and a visualizer configured to visualize 6DoF of the object by labeling the 6DoF based on the object location information (see Figure 3A and refer, for example, to paragraphs [0070] and [0071], which describe the object labeling, and to paragraphs [0067], [0091], [0092] and [0106], which describe the visualization of the 6DoF of the object labeled based on the object location information).
In regard to claim 9, McCormac describes wherein the converter is further configured to: determine a size of a real space corresponding to a pixel of the image based on an actual size of a space, where the object is located in a real space, and resolution of the image, and determine the object location information in the real space based on the center coordinate of the object and the size of the real space corresponding to the pixel of the image (refer, for example, to paragraphs [0066] and [0093], which discuss the size of the surfels, and to paragraphs [0066] and [0091], which discuss the resolution of the surfels in the image and the determination of the object location information in the real space based on the center coordinate of the object and the size of the real space corresponding to the pixel of the image).
With regard to claim 10, McCormac describes wherein the converter is further configured to determine an x value of the object location information in the real space using an X value of the center coordinate, an actual horizontal length of a space visible on the image, and a resolution horizontal value of the image (refer for example to paragraph [0093]).
As to claim 11, McCormac describes wherein the converter is further configured to determine a y value of the object location information in the real space using a Y value of the center coordinate, an actual vertical length of a space visible on the image, and a resolution vertical value of the image (refer for example to paragraph [0093]).
In regard to claim 12, McCormac describes wherein the converter is further configured to determine a z value of the object location information in the real space by inputting an x value of the object location information in the real space and a y value of the object location information in the real space to a matrix representing a relationship between the x value of the object location information in the real space, the y value of the object location information in the real space, and the z value of the object location information in the real space (refer for example to paragraphs [0069] and [0093]).
With regard to claim 13, McCormac describes wherein the visualizer is further configured to determine a camera parameter representing a relationship between a camera and the object based on information about a location and rotation of the camera, the object location information in the real space, and information about the rotation; and label a location of the object existing in three-dimensions in the image of two-dimensions by applying a camera parameter to the object location information in the real space and perform visualization (refer, for example, to paragraphs [0050], [0053], [0055], [0056], [0064], [0091], [0130] and [0131]).
As to claim 14, McCormac describes wherein the visualizer is further configured to display an interface for inputting annotations according to the object location information in the real space and the information about the rotation (refer, for example, to paragraphs [0050], [0053], [0055], [0056], [0064], [0091], [0130] and [0131]).
Relevant Prior Art
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Thibodeau, Birchfield, Jeong (’821 and ’481), Hou, Li, and Fu each disclose systems similar to applicant’s claimed invention.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jose L. Couso, whose telephone number is (571) 272-7388. The examiner can normally be reached Monday through Friday from 5:30 am to 1:30 pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Bella, can be reached on 571-272-7778. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.
Information regarding the status of an application may be obtained from the Patent Center information webpage on the USPTO website. For more information about the Patent Center, see https://www.uspto.gov/patents/apply/patent-center. Should you have questions about access to the Patent Center, contact the Patent Electronic Business Center (EBC) at 571-272-4100 or via email at ebc@uspto.gov.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
/JOSE L COUSO/Primary Examiner, Art Unit 2667
February 3, 2026