Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-3, 5-8, 10-13 and 15 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by BULZACKI et al. (WO 2016/191856).
With respect to claim 1, BULZACKI et al. teach one or more cameras configured to capture a plurality of betting areas arranged in a two-dimensional orientation on a gaming table from diagonally above ([00222] discloses that the camera is at an elevated angle; Figs. 5 and 6) and generate an image including a plurality of gaming chips wagered in a betting area of the plurality of betting areas (Figs. 5 and 6, [00133], [00144]-[00146] disclose that several chip stacks are imaged at various 2D coordinates on the table);
a learning model configured to recognize a position, a type, or a number of gaming chips wagered on a relatively near place or on a relatively far place on the gaming table by a player by analyzing one image using trained artificial intelligence or deep learning ([00133] and [00138] disclose the use of machine learning, and depth and coordinates are considered in the data at [00137]-[00146]);
a device configured to acquire teacher data for generating or learning the learning model by capturing the images (the camera at [00144] is suitable for and configured to acquire image data, which is what the machine learning at [00138] would be trained on); and
a learning device configured to train the learning model using the teacher data, wherein the teacher data includes a plurality of images in which illumination conditions and chip hiding states are different from each other ([00132]-[00134] and [00142]-[00144] disclose that images with different illumination conditions and obstructions are used in the calibration process, which can use machine learning).
With respect to claim 2, BULZACKI et al. teach that the teacher data includes a training image showing a single unstacked gaming chip (see [00249]: the system may also be trained to differentiate between new versions of chips and obsolete versions of chips; in order to do so, the system must be able to identify each individual chip).
With respect to claim 3, BULZACKI et al. teach that the teacher data includes a training image showing a second plurality of gaming chips stacked on top of each other (See [00231]).
With respect to claim 5, BULZACKI et al. teach that the learning model is configured to recognize the position, the type, and the number of the gaming chips wagered on both the relatively near place and the relatively far place (para [00060]: "Machine-learning techniques (e.g., random forests) may be utilized and refined such that visual features representative of different chip values are readily identified, despite variations between different facilities, lighting conditions and chip types."; see also paras [00133] and [00137]-[00138]).
With respect to claim 6, BULZACKI et al. teach one or more cameras configured to capture a plurality of betting areas arranged in a two-dimensional orientation on a gaming table from diagonally above ([00222] discloses that the camera is at an elevated angle; Figs. 5 and 6) and generate an image including a plurality of gaming chips wagered in a betting area of the plurality of betting areas (Figs. 5 and 6, [00133], [00144]-[00146] disclose that several chip stacks are imaged at various 2D coordinates on the table);
a learning model configured to recognize a position, a type, or a number of gaming chips wagered on a relatively near place or on a relatively far place on the gaming table by a player by analyzing one image using trained artificial intelligence or deep learning ([00133] and [00138] disclose the use of machine learning, and depth and coordinates are considered in the data at [00137]-[00146]);
a device configured to acquire teacher data for generating or learning the learning model by capturing the images (the camera at [00144] is suitable for and configured to acquire image data, which is what the machine learning at [00138] would be trained on); and
a learning device configured to train the learning model using the teacher data, wherein the teacher data includes an image showing a stack of a plurality of gaming chips having a specific color that differs from each other on a side (paras [00226]-[00227] and [00245]).
With respect to claim 7, BULZACKI et al. teach that the teacher data includes a training image showing a single unstacked gaming chip (see [00249]: the system may also be trained to differentiate between new versions of chips and obsolete versions of chips; in order to do so, the system must be able to identify each individual chip).
With respect to claim 8, BULZACKI et al. teach that the teacher data includes a training image showing a second plurality of gaming chips stacked on top of each other (See [00231]).
With respect to claim 10, BULZACKI et al. teach that the learning model is configured to recognize the position, the type, and the number of the gaming chips wagered on both the relatively near place and the relatively far place (para [00060]: "Machine-learning techniques (e.g., random forests) may be utilized and refined such that visual features representative of different chip values are readily identified, despite variations between different facilities, lighting conditions and chip types."; see also paras [00133] and [00137]-[00138]).
With respect to claim 11, BULZACKI et al. teach one or more cameras configured to capture a plurality of betting areas arranged in a two-dimensional orientation on a gaming table from diagonally above ([00222] discloses that the camera is at an elevated angle; Figs. 5 and 6) and generate an image including a plurality of gaming chips wagered in a betting area of the plurality of betting areas (Figs. 5 and 6, [00133], [00144]-[00146] disclose that several chip stacks are imaged at various 2D coordinates on the table);
a learning model configured to recognize a position, a type, or a number of gaming chips wagered on a relatively near place or on a relatively far place on the gaming table by a player by analyzing one image using trained artificial intelligence or deep learning ([00133] and [00138] disclose the use of machine learning, and depth and coordinates are considered in the data at [00137]-[00146]);
a device configured to acquire teacher data for generating or learning the learning model by capturing the images (the camera at [00144] is suitable for and configured to acquire image data, which is what the machine learning at [00138] would be trained on); and
a learning device configured to train the learning model using the teacher data, wherein the teacher data includes an image of a plurality of gaming chips stacked out of alignment with each other (paras [00061] and [00231]).
With respect to claim 12, BULZACKI et al. teach that the teacher data includes a training image showing a single unstacked gaming chip (see [00249]: the system may also be trained to differentiate between new versions of chips and obsolete versions of chips; in order to do so, the system must be able to identify each individual chip).
With respect to claim 13, BULZACKI et al. teach that the teacher data includes a training image showing a second plurality of gaming chips stacked on top of each other (See [00231]).
With respect to claim 15, BULZACKI et al. teach that the learning model is configured to recognize the position, the type, and the number of the gaming chips wagered on both the relatively near place and the relatively far place (para [00060]: "Machine-learning techniques (e.g., random forests) may be utilized and refined such that visual features representative of different chip values are readily identified, despite variations between different facilities, lighting conditions and chip types."; see also paras [00133] and [00137]-[00138]).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 4, 9 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over BULZACKI et al. (WO 2016/191856) in view of ZHONG (CN 110678908).
BULZACKI et al. teach all the limitations of claims 1, 6 and 11 as applied above, from which claims 4, 9 and 14 respectively depend.
BULZACKI et al. do not teach expressly that the teacher data includes a training image showing a gaming chip that is partially hidden due to a blind spot of the one or more cameras.
ZHONG teaches the teacher data includes a training image showing a gaming chip that is partially hidden due to a blind spot of the one or more cameras (page 8 last paragraph).
At the time of effective filing, it would have been obvious to a person of ordinary skill in the art to train with data in which part of the data is hidden in the method of BULZACKI et al.
The suggestion/motivation for doing so would have been to accurately identify chips later.
Therefore, it would have been obvious to combine ZHONG with BULZACKI et al. to obtain the invention as specified in claim 4.
With respect to claim 9, claim 9 is rejected for the same reasons as claim 4 above.
With respect to claim 14, claim 14 is rejected for the same reasons as claim 4 above.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Randolph Chu whose telephone number is 571-270-1145. The examiner can normally be reached on Monday to Thursday from 7:30 am - 5 pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Matthew Bella, can be reached at (571) 272-7778.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
/RANDOLPH I CHU/
Primary Examiner, Art Unit 2667