DETAILED ACTION
[1] Remarks
I. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
II. Claims 1-24 are pending and have been examined; claims 1-24 are rejected. Explanations are provided below.
III. An inventor and assignee search was performed, and the examiner determined that no double patenting rejection is necessary.
IV. Patent eligibility (under the 2019 revised guidance) is shown by the following: Claims 1-24 pass the patent eligibility test because there is no limitation or combination of limitations amounting to an abstract idea.
V. The PCT application, PCT/IL2021/051551, was considered, and the examiner determined that none of its cited prior art references are relevant to the claims of the current application.
[2] Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
Use of the word “means” (or “step for”) in a claim with functional language creates a rebuttable presumption that the claim element is to be treated in accordance with 35 U.S.C. 112(f) (pre-AIA 35 U.S.C. 112, sixth paragraph). The presumption that 35 U.S.C. 112(f) (pre-AIA 35 U.S.C. 112, sixth paragraph) is invoked is rebutted when the function is recited with sufficient structure, material, or acts within the claim itself to entirely perform the recited function. Absence of the word “means” (or “step for”) in a claim creates a rebuttable presumption that the claim element is not to be treated in accordance with 35 U.S.C. 112(f) (pre-AIA 35 U.S.C. 112, sixth paragraph). The presumption that 35 U.S.C. 112(f) (pre-AIA 35 U.S.C. 112, sixth paragraph) is not invoked is rebutted when the claim element recites function but fails to recite sufficiently definite structure, material or acts to perform that function.
Claim elements in this application that use the word “means” (or “step for”) are presumed to invoke 35 U.S.C. 112(f) except as otherwise indicated in an Office action. Similarly, claim elements that do not use the word “means” (or “step for”) are presumed not to invoke 35 U.S.C. 112(f) except as otherwise indicated in an Office action.
Claims 1-9 and 19-20 are not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, for the following reason: the limitations are modified by sufficient structure or material for performing the claimed functions.
Claims 10-18 do not require interpretation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because they are method claims and/or computer-readable medium (CRM) claims.
Upon examination of the specification and claims, the examiner has determined, under the best understanding of the scope of the claims, that rejections under 35 U.S.C. 112(a)/(b) are not necessitated for the following reason: sufficient support is provided in the written description and drawings of the invention.
[3] Grounds of Rejection
Claim Rejections - 35 USC § 103
1. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
2. Claims 1-2, 5-7, 10-11, 14, 16-17, and 19-23 are rejected under 35 U.S.C. 103 as being unpatentable over Kim (US 20120284132) in view of Lyons (US 20200020024).
Regarding claim 1, Kim discloses a computerized system for recognizing an object through a two-dimensional (2D) virtual construct (see paragraph 101, the digital signage display is where the 2D virtual construct is displayed; see figure 14, where the object is displayed within a virtual construct), the system comprising:
a) at least two panels, each consisting of at least one sensor operable to form the 3D virtual construct (see figure 3, 303 are cameras, which are included in panels);
b) the object's database (see paragraph 15, storing the received information on the at least one or more products in a database along with the first ID information); and
c) a central processing module (CPM) in communication with the panel's sensors and the object database, the CPM comprising at least one processor and being in communication with a non-volatile memory storage device storing thereon a processor-readable media with a set of executable instructions configured (see paragraph 250, the recording medium that can be read by the processor includes all types of recording devices storing data that can be read by the processor; examples of the recording media that can be read by a processor may include ROMs, RAMs, CD-ROMs, magnetic tapes, floppy disks, optical data storing devices, and so on; see also paragraph 15, wherein the information of at least one or more products is mapped to a first ID information for identifying a specific cart, storing the received information on the at least one or more products in a database along with the first ID information), when executed, to cause the at least one processor to perform the step of: using the at least two panel sensors, detecting the object through the 2D virtual construct (see paragraph 78, detecting the type of the object that is placed between the light emitting unit and the light receiving unit).
Kim is silent in disclosing a computerized system for recognizing an object motion through a three-dimensional (3D) virtual construct; using the at least two panel sensors, detecting motion of the object through the 3D virtual construct.
Lyons discloses a computerized system for recognizing an object motion through a three-dimensional (3D) virtual construct (see paragraph 43, as the camera of the mobile computing device tracks the orientation and motion of the cube); and using the at least two panel sensors, detecting motion of the object through the 3D virtual construct (see paragraph 47, the trackable object may bear markers that allow for at least two detection distances and are capable of detection by relatively low-resolution cameras in multiple, common lighting situations; "cameras" indicates more than one camera).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to include recognizing an object motion through a three-dimensional virtual construct in order to achieve spatial awareness, which enables accurate object localization and interaction, allowing the system to understand an object's full shape, size, orientation, and position.
Regarding claim 2, Lyons discloses the system of claim 1, wherein the 3D virtual construct forms a 2.5D or 3D slab-shaped region (see figure 2, 200 is a 3D slab-shaped region). See the motivation for claim 1.
Regarding claim 5, Kim discloses the system of claim 2, wherein each of the at least two panels' at least one sensor comprises at least one of: a) a plurality of cameras; b) a LIDAR emitter and a LIDAR receiver; c) a LASER emitter and a LASER detector; d) a magnetic field generator; e) an acoustic transmitter and an acoustic receiver; and f) an electromagnetic radiation source (see figure 3, 303 are the plurality of cameras).
Regarding claim 6, Kim discloses the system of claim 5, wherein the panel's sensor is coupled to an open frame, operable to provide single-side detection, or two-side detection (see figure 3, the shopping cart has two-side detection).
Regarding claim 7, Kim discloses the system of claim 6, wherein the open frame is coupled horizontally to at least one of: the apical end of an open cart, a self-checkout system, and vertically to a refrigerator opening, or to a refrigerator's shelf opening (see paragraph 58, The performance of purchase could be adding the product to a cart and then checking out, at the same time or at some later time, in which case the contents of the cart might be saved for later use, in an online purchasing system).
Regarding claims 10 and 19, see the rationale and rejection for claim 1. Also, a processor with a CRM is taught in Kim, paragraph 250.
Regarding claims 11 and 21, see the rationale and rejection for claim 2.
Regarding claims 14 and 20, see the rationale and rejection for claim 5.
Regarding claim 16, see the rationale and rejection for claim 6.
Regarding claim 17, see the rationale and rejection for claim 7.
Regarding claim 22, Kim discloses the article of claim 19, comprising four panels consisting of at least one sensor operable to form a closed-frame 3D virtual construct (see figure 3, the shopping cart has four sides).
Regarding claim 23, Kim discloses the article of claim 22, comprising a plurality of cameras, operable upon a breach of a plane formed by the 3D virtual construct by an object, to capture an image of the breaching object from at least two angles (see figure 3, the imagers, 303s, are placed at different angles, 0 and 180 degrees).
3. Claims 3-4, 9, and 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over Kim (US 20120284132) in view of Lyons (US 20200020024) and Takahashi (US 20190022492).
Regarding claim 3, Kim and Lyons disclose all the limitations of claim 2 but are silent in disclosing the system of claim 2, whereupon detecting of motion of the object through the 3D virtual construct, the set of executable instructions is further configured, when executed, to cause the at least one processor to perform the step of: determining the trajectory of the object motion through the 3D virtual construct.
Takahashi discloses the system of claim 2, whereupon detecting of motion of the object through the 3D virtual construct, the set of executable instructions is further configured, when executed to cause the at least one processor to perform the step of: determining the trajectory of the object motion through the 3D virtual construct (see paragraph 127, a virtual environment construction apparatus that constructs a virtual environment to be experienced by a user based on a real environment in which another party launches a flying object, the virtual environment construction apparatus including … a presentation trajectory decision unit that decides a presentation trajectory representing a motion of the flying object corresponding to each scene included in the presentation sequence).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to include determining and analyzing the trajectory of object motion within a 3D virtual construct because it provides data for training machine learning models for predictive analysis, ensures continuous monitoring through hand-over tracking between multiple cameras, and optimizes system resource usage.
Regarding claim 4, Lyons discloses the system of claim 3, whereupon detecting the trajectory of object motion through the 3D virtual construct, the set of executable instructions is further configured, when executed, to cause the at least one processor to perform the step of: using the object database, recognizing the object (see paragraph 43, recognizes the markers and their orientation on the cube; a virtual product chosen by the user to evaluate is inserted in the display aligned with the orientation of the cube).
Regarding claim 9, Kim discloses the system of claim 4, wherein: a) each of the panels comprises a sensor operable to capture an image of the object (see figure 3, 303 are panels, each encasing a camera); and b) the set of executable instructions is further configured, when executed, to cause the at least one processor to perform the step of: capturing an image of the object from at least two different sides (see figure 3, the 303s are on opposite sides, i.e., two different sides).
Regarding claim 12, see the rationale and rejection for claim 3.
Regarding claim 13, see the rationale and rejection for claim 4.
4. Claims 15 and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Kim (US 20120284132) in view of Lyons (US 20200020024) and Alakarhu (US 20100328456).
Regarding claim 15, Kim and Lyons disclose all the limitations of claim 14 but are silent in disclosing the CRM of claim 14, comprising at least six sensors, wherein the sensors are each a camera and wherein the VRG is defined by the overlap of the at least six cameras' fields of view.
Alakarhu discloses the CRM of claim 14, comprising at least six sensors, wherein the sensors are each a camera and wherein the VRG is defined by the overlap of the at least six cameras' fields of view (see figure 4, cameras from 402 to N, where N is at least 6, and where the cameras are placed very close together so that their fields of view overlap).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to include overlapping field-of-view coverage because it ensures that areas are monitored from multiple angles, removing potential hiding spots, guaranteeing continuous surveillance, and enabling features such as 3D reconstruction and accurate subject tracking.
Regarding claim 24, see the rationale and rejection for claim 15, where the N cameras are employed to generate a composite in the computer (virtual world). See the motivation for claim 15.
[4] Claim Objections
Claims 8 and 18 are objected to as being dependent upon a rejected base claim but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
With regard to claim 8, the examiner cannot find any applicable prior art providing teachings for the following limitations: the system of claim 7, whereupon recognition of the object, the set of executable instructions is further configured, when executed to cause the at least one processor to perform the step of: a) if the motion trajectory detected is through the 3D virtual construct from the at least one of: the apical end of an open cart, the self-checkout system, and the refrigerator opening, or the refrigerator's shelf opening, to the outside, identifying an origination location of the object in the shopping cart; and b) if the motion trajectory detected is through the 3D virtual construct from outside the at least one of: the apical end of an open cart, the self-checkout system, and the refrigerator opening, or the refrigerator's shelf opening, to the inside, identifying a location of the object in the at least one of the open cart, the refrigerator, or the refrigerator's; in combination with the rest of the limitations of claims 1-2 and 5-7.
Takahashi discloses the system of claim 7, whereupon recognition of the object, the set of executable instructions is further configured, when executed, to cause the at least one processor to perform the step of determining a motion trajectory (see paragraph 127, a presentation trajectory representing a motion of the flying object corresponding to each scene included in the presentation sequence), but not a trajectory through the 3D virtual construct from the at least one of:
Kim discloses the apical end of an open cart, the self-checkout system, and the refrigerator opening, or the refrigerator's shelf opening, to the outside, identifying an origination location of the object in the shopping cart (see paragraph 58, The performance of purchase could be adding the product to a cart and then checking out, at the same time or at some later time, in which case the contents of the cart might be saved for later use, in an online purchasing system).
Combining Kim and Takahashi to read on claim 8 would amount to an improper piecemeal analysis of the claim.
Regarding claim 18, see the rationale for claim 8.
CONTACT INFORMATION
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALEX LIEW (duty station is located in New York City) whose telephone number is (571)272-8623 (FAX 571-273-8623), cell (917)763-1192 or email alexa.liew@uspto.gov. Please note the examiner cannot reply through email unless an internet communication authorization is provided by the applicant. The examiner can be reached anytime.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, MISTRY ONEAL R, can be reached on (313)446-4912. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ALEX KOK S LIEW/Primary Examiner, Art Unit 2674 Telephone: 571-272-8623
Date: 12/20/25