DETAILED ACTION
Claims 1, 4-9 and 12-20 are pending in this application. Claims 1, 9, and 17-18 have been amended, claims 2-3 and 10-11 are canceled, and claims 19 and 20 are newly added.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
The present application claims priority to provisional application 63/212,188, filed 06/18/2021. Applicant’s claim for the benefit of a prior-filed U.S. provisional application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/04/2025 has been entered.
Response to Arguments
35 USC § 103
Applicant’s arguments with respect to the rejections of claims 1-17 under 35 U.S.C. 103 have been fully considered but are not persuasive. Applicant argues that neither Ding nor Aman teaches “a database that maps timestamps to color classes,” where this mapping is used to anticipate color classes that may appear. The examiner respectfully disagrees. Aman teaches that each player’s uniform, hair, and skin tones are classified by color, and that these classes are stored with the player’s information as part of the system’s method of classification (Aman, [0226]-[0227] and [0379]). The system further uses these color classes to track players and to anticipate or predict the next location of a player or other object using classifiers such as color, where the tracking results are timestamped (Aman, [0022], [0027], [0433]-[0434]). Given the teachings of Aman, it would have been obvious to one of ordinary skill in the art before the effective filing date of the presently claimed invention to use color classes and timestamps to anticipate objects and object classes (such as colors) in videos.
Applicant further argues, with respect to claims 6 and 14, that Ding does not teach a matched mask classification. The examiner respectfully disagrees. Ding teaches in [0062] that the system detects a query object, masks the object, and detects the color of the object, which would be analogous to the “matched mask” function recited in claims 6 and 14 as understood by one of ordinary skill in the art. Therefore, for at least these reasons, the examiner respectfully maintains the rejections over Ding in view of Aman.
[Screenshots of cited text omitted (Aman, [0433]-[0434]).]
Applicant’s arguments with respect to the rejections of claim 18 and newly added claims 19 and 20 under 35 U.S.C. 103 over Ding and Aman in view of Bahou have been fully considered and are persuasive. Given the change in scope introduced by the amendments to claim 18, a new ground of rejection is presented and fully discussed below.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 4-9 and 12-17 are rejected under 35 U.S.C. 103 as being unpatentable over Ding (US 20210027497 A1, Filed Jul. 22, 2019) in view of Aman (US 20150297949 A1, Filed Feb. 20, 2015).
Regarding claim 1, Ding discloses: A method for classifying objects in an image using a color-based machine learning classifier (Ding, Figure 10, element 1014, Object Detection Neural Network Manager), the method comprising:
training, with a dataset comprising a plurality of images (Ding, [0051] Neural network is trained with a dataset of images), a machine learning classifier (Ding, Figure 10, element 1014 Object Detection Neural network Manager) to classify an object in a given image into a color class from a set of color classes each representing a distinct color (Ding, Figure 10, element 1018 Color matching manager, Figure 11, step 1150 classifying the object as the color based on the color matching score),
[Screenshot of cited text omitted (Ding, [0051]).]
wherein the color class represents a predominant color of the object and wherein the set of color classes is of a first size (Ding [0098] Size of the color similarity threshold for a mapped query color (mapped color is the color for the object));
receiving an input image (Ding, Figure 10 element 1022, Digital Images) depicting at least one object belonging to the set of color classes (Ding, Figure 12, steps 1120-1150 Mapping the query color to points in a color space, detecting a query object in the images, generating a color matching score for the query object);
determining, from the set of color classes (Ding, Figure 12, elements 1110-1150 Mapping the query color to points in a color space, detecting a query object in the images, generating a color matching score for the query object),
[a subset of color classes that are anticipated to be in the input image based on the metadata comprising a timestamp and an identifier of a source location of the input image by:
querying, using the timestamp and the identifier of the source location, a database provided by the source location that maps timestamps to color classes to identify a subset of objects associated with the timestamp of the input image;]
and including, in the subset of color classes, color classes associated with the subset of objects (Ding, [0058] objects are detected and identified based upon color, examiner interprets this as the color class being associated with the object and [0151] images are managed by a digital image manager, images can be stored in a database managed by the digital image manager);
[Screenshot of cited text omitted (Ding, [0058]).]
[Screenshots of cited text omitted (Ding, [0151]).]
generating a matched mask input indicating the subset of color classes in the input image (Ding, [0076] the color classification system generates a mask of the object for each of the query colors), wherein the subset of color classes is of a second size that is smaller than the first size (Ding, [0086] the color classification system identifies the color values for a query color from metadata, where the query color is selected from a list of colors; the query color in this case would therefore be a subset of the color list and would be smaller, because it contains only the one color whereas the list contains multiple);
inputting both the input image and the matched mask input into the machine learning classifier (Ding, figure 10, element 1010 Digital Image manager and 1016 Object mask generator), wherein the machine learning classifier is configured to classify the at least one object into at least one color class of the subset of color classes (Ding, [0030] Color classification system will generate a color similarity score in connection with object detection, to identify the color corresponding to the query object);
[Screenshot of cited text omitted (Ding, [0030]).]
and outputting the at least one color class (Ding, Figure 11 element 1150 Classifying the color based on the color-matching score).
[Screenshots of Ding, Figures 10 and 11, omitted.]
Ding does not teach:
a subset of color classes that are anticipated to be in the input image based on the metadata comprising a timestamp and an identifier of a source location of the input image by:
querying, using the timestamp and the identifier of the source location, a database provided by the source location that maps timestamps to color classes to identify a subset of objects associated with the timestamp of the input image;
However, in the same field of endeavor of machine classification, Aman teaches:
a subset of color classes that are anticipated to be in the input image based on (Aman, [0379] images or video of a player from a time slice (a time period or timestamp of footage) are analyzed to perform facial recognition, which includes analyzing the upper portion of the image to classify the participant’s skin tone, hair, and other features; [0377] facial recognition includes extraction of the image to determine skin tone and other features, such as topological features, to determine who the player is; further, [0226]-[0227] teach that a color table of possible skin and uniform tones of players is saved to be used in decoding the video, where the participants who are expected or anticipated to be playing in the game are stored as well, which indicates that the color classes used to identify the participants are stored in a database where they can be used to identify players expected to play in the games), the metadata comprising a timestamp and an identifier of a source location of the input image by (Aman, [0379] images or video of a player from a time slice are analyzed to perform facial recognition, which includes analyzing the upper portion of the image to classify the participant’s skin tone, hair, and other features; [0377] facial recognition includes extraction of the image to determine skin tone (color classes) and other features, such as topological features, to determine who the player is):
querying, using the timestamp and the identifier of the source location, a database provided by the source location that maps timestamps to color classes to identify (Aman, [0393] the process involves verifying the time stamps on the data for all images/data used; [0424] the system can determine the location of a player on the field and work to predict “exciting situations” developing within the game, which indicates the ability to predict the movement of players/objects; [0433] the location of presence of the player is predicted using the tracking system; [0434] the database has a player profile/information including colors classified as the team colors for each team, and database information is used in player location prediction and player identification; [0379] images or video of a player from a time slice (time period or timestamp of footage) are analyzed to perform facial recognition, which includes classifying the participant’s skin tone, hair, and other features, where the colors are associated with the timestamps of the footage), a subset of objects associated with the timestamp of the input image (Aman, [0393] the process involves verifying the time stamps on the data for all images/data (input images) used; [0434] the database has a player profile/information including colors classified as the team colors for each team, and database information is used in player location prediction and player identification; the players on the field, or the player in question, would be the subset of the data, the whole dataset being all the players);
The combination of Ding and Aman would have been obvious to one of ordinary skill in the art before the effective filing date of the presently claimed invention. The motivation for the combination lies in that Aman discusses a method of tracking and predicting scenarios on a field that allows the system to predict the movement of objects coming in and out of view (see [0424], [0433]-[0434]), which would improve the system of Ding, which identifies objects from images but includes no tracking method. (Aman, [0424], [0433]-[0434])
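For illustration only, the claim-1 limitation at issue can be sketched in a few lines of Python. Every name below (TIMESTAMP_DB, anticipated_subset, matched_mask, the camera identifier, and the color set) is hypothetical and appears in neither the claims nor the cited references; the sketch only shows the shape of the timestamp-and-source lookup and the resulting subset of a second, smaller size.

```python
# Hypothetical sketch of the claim-1 limitation; none of these names
# appear in Ding, Aman, or the application as filed.

FULL_COLOR_SET = {"red", "blue", "green", "white", "black"}  # "first size" = 5

# Hypothetical database provided by the source location, mapping
# (source identifier, timestamp) pairs to anticipated color classes.
TIMESTAMP_DB = {
    ("camera_1", "2021-06-18T12:00"): {"red", "blue"},
}

def anticipated_subset(timestamp, source_id):
    """Query the source-provided database for the color classes
    associated with the input image's timestamp and source."""
    return TIMESTAMP_DB.get((source_id, timestamp), FULL_COLOR_SET)

def matched_mask(subset):
    """Indicator over the full color set marking the anticipated subset."""
    return {color: (color in subset) for color in sorted(FULL_COLOR_SET)}

subset = anticipated_subset("2021-06-18T12:00", "camera_1")
mask = matched_mask(subset)
# Here the subset ("second size" = 2) is smaller than the full set.
```

The mask and the input image would then both be fed to the classifier, which restricts its output to the anticipated subset.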
Regarding claim 4, the combination of Ding and Aman teaches: The method of claim 1, wherein the matched mask input further identifies similar classes that the at least one object does not belong to (Ding, [0073]: the color similarity threshold indicates the color of the object, but can additionally indicate alternative colors when a match is not found).
[Screenshot of cited text omitted (Ding, [0073]).]
Regarding claim 5, the combination of Ding and Aman teaches: The method of claim 1, wherein the machine learning classifier is a convolutional neural network (Ding, [0141] teaches that multiple types of neural networks have been used in object detection and color classification methods; although a CNN is not named as one of these options, one of ordinary skill in the art would know that CNNs are commonly used in computer vision classification applications, so it would be obvious that a CNN could be used to complete the method of Ding).
Regarding claim 6, the combination of Ding and Aman teaches: The method of claim 1, wherein the machine learning classifier is configured to:
determine, for each respective color class in the set of color classes, a respective probability of the at least one object belonging to the respective color class (Ding, [0026] the color classification system can identify a color similarity region from a multidimensional color space, the system generates color matching scores for one or more colors based on color correspondence, object is classified as a color based on the score. Examiner is interpreting the use of a color matching score as being analogous to the probability of an object being that color);
[Screenshot of cited text omitted (Ding, [0026]).]
and adjust the respective probability based on whether the respective color class is present in the matched mask input (Ding, Figure 2, elements 206 and 208: the query object is detected in an image, bounding boxes are shown around the portion of the image with the query color (206), and the color match is determined (208) based upon where the bounding box is placed; [0069] notes that in additional embodiments step 206 may be performed using an object mask system, where the mask would be the target area for color detection; further, [0073] says that step 208 may include comparison of color similarity thresholds to determine the color. Ding [0105]-[0108] also discusses that, when the color matching score (probability) is being computed, the model may determine that a certain portion of the object, or certain pixels in the object, are not valid or are not reflective of the overall object color, and the model may adjust the score or the included pixels to reflect this).
[Screenshots of cited material omitted (Ding, Figure 2, steps 206 and 208; [0069]; [0073]).]
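The claim-6 adjustment step can likewise be illustrated with a short sketch. The function below is purely hypothetical and is not asserted to be how either Ding or Aman implements its scoring; it merely shows one way per-class probabilities could be adjusted based on presence in a matched mask.

```python
def adjust_probabilities(probs, mask):
    """Suppress color classes absent from the matched mask input and
    renormalize the remaining probabilities so they sum to 1."""
    kept = {c: p for c, p in probs.items() if mask.get(c, False)}
    total = sum(kept.values())
    if total == 0:  # nothing anticipated: fall back to the raw scores
        return dict(probs)
    return {c: p / total for c, p in kept.items()}

# Example: raw classifier scores over three classes, mask allowing two.
raw = {"red": 0.5, "blue": 0.3, "green": 0.2}
adjusted = adjust_probabilities(raw, {"red": True, "blue": True, "green": False})
```

In this example the masked-out "green" class is dropped and the remaining scores are renormalized.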
Regarding claim 7, the combination of Ding and Aman teaches: The method of claim 1, wherein the input image is a video frame of a livestream, and wherein the machine learning classifier classifies the at least one object in real-time (Aman, [0050] describes a real-time automatic tracking and identification system, which would be able to track at least one object in real time).
The combination of Ding and Aman would have been obvious to one of ordinary skill in the art before the effective filing date of the presently claimed invention. The motivation for the combination lies in that the real-time tracking and identification system described in Aman would allow persons or objects to be tracked as an event unfolds, eliminating delay in tracking and identification. (Aman, [0050])
Regarding claim 8, the combination of Ding and Aman teaches: The method of claim 1, wherein the at least one object is a person wearing an outfit of a particular color (Ding, Figure 2, steps 206 and 210 show an image of a person wearing an outfit whose clothing is being classified).
[Screenshots of Ding, Figure 2, omitted.]
Regarding claim 9, the combination of Ding and Aman teaches: A system for classifying objects in an image using a color-based machine learning classifier, the system comprising: a hardware processor (Ding, [0158] hardware processors are configured to execute the program) configured to:
train, with a dataset comprising a plurality of images (Ding, [0051] Neural network is trained with a dataset of images), a machine learning classifier (Ding, Figure 10, element 1014 Object Detection Neural network Manager) to classify an object in a given image into a color class from a set of color classes each representing a distinct color (Ding, Figure 10, element 1018 Color matching manager, Figure 11, step 1150 classifying the object as the color based on the color matching score),
[Screenshot of cited text omitted (Ding, [0051]).]
wherein the color class represents a predominant color of the object and wherein the set of color classes is of a first size (Ding [0098] Size of the color similarity threshold for a mapped query color (mapped color is the color for the object));
receive an input image (Ding, Figure 10 element 1022, Digital Images) depicting at least one object belonging to the set of color classes (Ding, Figure 12, steps 1120-1150 Mapping the query color to points in a color space, detecting a query object in the images, generating a color matching score for the query object);
determine, from the set of color classes (Ding, Figure 12, elements 1110-1150 Mapping the query color to points in a color space, detecting a query object in the images, generating a color matching score for the query object),
[a subset of color classes that are anticipated to be in the input image based on the metadata comprising a timestamp and an identifier of a source location of the input image, by:
querying, using the timestamp and the identifier of the source location, a database provided by the source location that maps timestamps to color classes to identify a subset of objects associated with the timestamp of the input image;]
and including, in the subset of color classes, color classes associated with the subset of objects (Ding, [0058] objects are detected and identified based upon color, examiner interprets this as the color class being associated with the object and [0151] images are managed by a digital image manager, images can be stored in a database managed by the digital image manager);
[Screenshot of cited text omitted (Ding, [0058]).]
[Screenshots of cited text omitted (Ding, [0151]).]
generate a matched mask input indicating the subset of color classes in the input image (Ding, [0076] the color classification system generates a mask of the object for each of the query colors), wherein the subset of color classes is of a second size that is smaller than the first size (Ding, [0086] the color classification system identifies the color values for a query color from metadata, where the query color is selected from a list of colors; the query color in this case would therefore be a subset of the color list and would be smaller, because it contains only the one color whereas the list contains multiple);
input both the input image and the matched mask input into the machine learning classifier (Ding, figure 10, element 1010 Digital Image manager and 1016 Object mask generator), wherein the machine learning classifier is configured to classify the at least one object into at least one color class of the subset of color classes (Ding, [0030] Color classification system will generate a color similarity score in connection with object detection, to identify the color corresponding to the query object);
[Screenshot of cited text omitted (Ding, [0030]).]
and output the at least one color class (Ding, Figure 11 element 1150 Classifying the color based on the color-matching score).
[Screenshots of Ding, Figures 10 and 11, omitted.]
Ding does not teach:
a subset of color classes that are anticipated to be in the input image based on a timestamp in metadata of the input image, by:
querying, using the timestamp and the identifier of the source location, a database provided by the source location that maps timestamps to color classes to identify a subset of objects associated with the timestamp of the input image;
However, in the same field of endeavor of machine classification, Aman teaches a subset of color classes that are anticipated to be in the input image based on (Aman, [0379] images or video of a player from a time slice (a time period or timestamp of footage) are analyzed to perform facial recognition, which includes analyzing the upper portion of the image to classify the participant’s skin tone, hair, and other features; [0377] facial recognition includes extraction of the image to determine skin tone and other features, such as topological features, to determine who the player is), the metadata comprising a timestamp and an identifier of a source location of the input image by (Aman, [0379] images or video of a player from a time slice are analyzed to perform facial recognition, which includes analyzing the upper portion of the image to classify the participant’s skin tone, hair, and other features; [0377] facial recognition includes extraction of the image to determine skin tone (color classes) and other features, such as topological features, to determine who the player is):
querying, using the timestamp and the identifier of the source location, a database provided by the source location that maps timestamps to color classes to identify (Aman, [0393] the process involves verifying the time stamps on the data for all images/data used; [0424] the system can determine the location of a player on the field and work to predict “exciting situations” developing within the game, which indicates the ability to predict the movement of players/objects; [0433] the location of presence of the player is predicted using the tracking system; [0434] the database has a player profile/information including colors classified as the team colors for each team, and database information is used in player location prediction and player identification; [0379] images or video of a player from a time slice (time period or timestamp of footage) are analyzed to perform facial recognition, which includes classifying the participant’s skin tone, hair, and other features, where the colors are associated with the timestamps of the footage), a subset of objects associated with the timestamp of the input image (Aman, [0393] the process involves verifying the time stamps on the data for all images/data (input images) used; [0434] the database has a player profile/information including colors classified as the team colors for each team, and database information is used in player location prediction and player identification; the players on the field, or the player in question, would be the subset of the data, the whole dataset being all the players);
The combination of Ding and Aman would have been obvious to one of ordinary skill in the art before the effective filing date of the presently claimed invention. The motivation for the combination lies in that Aman discusses a method of tracking and predicting scenarios on a field that allows the system to predict the movement of objects coming in and out of view (see [0424], [0433]-[0434]), which would improve the system of Ding, which identifies objects from images but includes no tracking method. (Aman, [0424], [0433]-[0434])
Regarding claim 12, the combination of Ding and Aman teaches: The system of claim 9, wherein the matched mask input further identifies similar classes that the at least one object does not belong to (Ding, [0073]: the color similarity threshold indicates the color of the object, but can additionally indicate alternative colors when a match is not found).
[Screenshot of cited text omitted (Ding, [0073]).]
Regarding claim 13, the combination of Ding and Aman teaches: The system of claim 9, wherein the machine learning classifier is a convolutional neural network (Ding, [0141] teaches that multiple types of neural networks have been used in object detection and color classification methods; although a CNN is not named as one of these options, one of ordinary skill in the art would know that CNNs are commonly used in computer vision classification applications, so it would be obvious that a CNN could be used to complete the method of Ding).
Regarding claim 14, the combination of Ding and Aman teaches: The system of claim 9, wherein the machine learning classifier is configured to:
determine, for each respective color class in the set of color classes, a respective probability of the at least one object belonging to the respective color class (Ding, [0026] the color classification system can identify a color similarity region from a multidimensional color space, the system generates color matching scores for one or more colors based on color correspondence, object is classified as a color based on the score. Examiner is interpreting the use of a color matching score as being analogous to the probability of an object being that color);
[Screenshot of cited text omitted (Ding, [0026]).]
and adjust the respective probability based on whether the respective color class is present in the matched mask input (Ding, Figure 2, elements 206 and 208: the query object is detected in an image, bounding boxes are shown around the portion of the image with the query color (206), and the color match is determined (208) based upon where the bounding box is placed; [0069] notes that in additional embodiments step 206 may be performed using an object mask system, where the mask would be the target area for color detection; further, [0073] says that step 208 may include comparison of color similarity thresholds to determine the color. Ding [0105]-[0108] also discusses that, when the color matching score (probability) is being computed, the model may determine that a certain portion of the object, or certain pixels in the object, are not valid or are not reflective of the overall object color, and the model may adjust the score or the included pixels to reflect this).
[Screenshots of cited material omitted (Ding, Figure 2, steps 206 and 208; [0069]; [0073]).]
Regarding claim 15, the combination of Ding and Aman teaches: The system of claim 9, wherein the input image is a video frame of a livestream, and wherein the machine learning classifier classifies the at least one object in real-time (Aman, [0050] describes a real-time automatic tracking and identification system, which would be able to track at least one object in real time).
The combination of Ding and Aman would have been obvious to one of ordinary skill in the art before the effective filing date of the presently claimed invention. The motivation for the combination lies in that the real-time tracking and identification system described in Aman would allow persons or objects to be tracked as an event unfolds, eliminating delay in tracking and identification. (Aman, [0050])
Regarding claim 16, the combination of Ding and Aman teaches: The system of claim 9, wherein the at least one object is a person wearing an outfit of a particular color (Ding, Figure 2, steps 206 and 210 show an image of a person wearing an outfit whose clothing is being classified).
Regarding claim 17, the combination of Ding and Aman teaches: A non-transitory computer readable medium (Ding, [0190] instructions may be stored on a non-transitory computer readable medium) storing thereon computer executable instructions for classifying objects in an image using a color-based machine learning classifier, including instructions for:
training, with a dataset comprising a plurality of images (Ding, [0051] Neural network is trained with a dataset of images), a machine learning classifier (Ding, Figure 10, element 1014 Object Detection Neural network Manager) to classify an object in a given image into a color class from a set of color classes each representing a distinct color (Ding, Figure 10, element 1018 Color matching manager, Figure 11, step 1150 classifying the object as the color based on the color matching score),
[Screenshot of cited text omitted (Ding, [0051]).]
wherein the color class represents a predominant color of the object and wherein the set of color classes is of a first size (Ding [0098] Size of the color similarity threshold for a mapped query color (mapped color is the color for the object));
receiving an input image (Ding, Figure 10 element 1022, Digital Images) depicting at least one object belonging to the set of color classes (Ding, Figure 12, steps 1120-1150 Mapping the query color to points in a color space, detecting a query object in the images, generating a color matching score for the query object);
determining, from the set of color classes (Ding, Figure 12, elements 1110-1150 Mapping the query color to points in a color space, detecting a query object in the images, generating a color matching score for the query object),
[a subset of color classes that are anticipated to be in the input image based on the metadata comprising a timestamp and an identifier of a source location of the input image, by:
querying, using the timestamp and the identifier of the source location, a database provided by the source location that maps timestamps to color classes to identify a subset of objects associated with the timestamp of the input image;]
and including, in the subset of color classes, color classes associated with the subset of objects (Ding, [0058] objects are detected and identified based upon color, examiner interprets this as the color class being associated with the object and [0151] images are managed by a digital image manager, images can be stored in a database managed by the digital image manager);
[Screenshot of cited text omitted (Ding, [0058]).]
[Screenshots of cited text omitted (Ding, [0151]).]
generating a matched mask input indicating the subset of color classes in the input image (Ding, [0076] the color classification system generates a mask of the object for each of the query colors), wherein the subset of color classes is of a second size that is smaller than the first size (Ding, [0086] the color classification system identifies the color values for a query color from metadata, where the query color is selected from a list of colors; the query color in this case would therefore be a subset of the color list, and would be smaller because it contains only the one color whereas the list contains multiple);
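For illustration only (this sketch is the examiner's and all names in it are hypothetical; it is not code from Ding or the present application), the claimed matched mask over a subset of color classes can be expressed as a binary vector over the full set of color classes:

```python
# Illustrative sketch only; hypothetical names, not code from Ding
# or the application. The mask marks which of the full set of color
# classes (first size) fall in the anticipated subset (second size).

ALL_COLOR_CLASSES = ["red", "blue", "green", "white", "black", "yellow"]  # first size: 6

def matched_mask(anticipated_subset):
    """Return a binary mask over the full class list, set only for
    color classes anticipated to appear in the input image."""
    return [1 if c in anticipated_subset else 0 for c in ALL_COLOR_CLASSES]

subset = {"red", "white"}          # second size: 2, smaller than the first size
print(matched_mask(subset))        # [1, 0, 0, 1, 0, 0]
```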
inputting both the input image and the matched mask input into the machine learning classifier (Ding, figure 10, element 1010 Digital Image manager and 1016 Object mask generator), wherein the machine learning classifier is configured to classify the at least one object into at least one color class of the subset of color classes (Ding, [0030] Color classification system will generate a color similarity score in connection with object detection, to identify the color corresponding to the query object);
[Image reproduction of Ding, [0030] omitted]
and outputting the at least one color class (Ding, Figure 11, element 1150 Classifying the color based on the color-matching score).
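As a further illustration (again hypothetical and the examiner's own, not drawn from Ding), inputting the matched mask alongside the classifier's per-class scores can be sketched as restricting the output to the anticipated subset:

```python
# Hypothetical sketch: restricting a classifier's color-class scores
# to an anticipated subset via a matched mask; illustrative only.

ALL_COLOR_CLASSES = ["red", "blue", "green", "white", "black", "yellow"]

def classify_with_mask(scores, mask):
    """Suppress scores for color classes outside the anticipated subset,
    then output the highest-scoring remaining class."""
    masked = [s if m else float("-inf") for s, m in zip(scores, mask)]
    best = max(range(len(masked)), key=lambda i: masked[i])
    return ALL_COLOR_CLASSES[best]

scores = [0.1, 0.7, 0.05, 0.6, 0.3, 0.2]   # raw per-class scores
mask   = [1,   0,   0,    1,   0,   0]     # anticipated subset: {"red", "white"}
print(classify_with_mask(scores, mask))    # white
```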
Ding does not teach; a subset of color classes that are anticipated to be in the input image based on the metadata comprising a timestamp and an identifier of a source location of the input image, by:
querying, using the timestamp and the identifier of the source location, a database provided by the source location that maps timestamps to color classes to identify a subset of objects associated with the timestamp of the input image;
However, in the same field of endeavor of machine classification, Aman teaches; a subset of color classes that are anticipated to be in the input image based on (Aman, [0379] images or video of a player from a time slice (time period or time stamp of footage) are analyzed to perform facial recognition, which includes analyzing the upper portion of the image to classify the participant's skin tone, hair and other features; [0377] facial recognition includes extraction of the image to determine skin tone and other features, such as topological features, to determine who the player is), the metadata comprising a timestamp and an identifier of a source location of the input image, by (Aman, [0379] images or video of a player from a time slice (time period or time stamp of footage) are analyzed to perform facial recognition, which includes analyzing the upper portion of the image to classify the participant's skin tone, hair and other features; [0377] facial recognition includes extraction of the image to determine skin tone (color classes) and other features, such as topological features, to determine who the player is):
querying, using the timestamp and the identifier of the source location, a database provided by the source location that maps timestamps to color classes to identify (Aman, [0393] the process involves verifying the time stamps on the data such that all images/data used are aligned in time; [0424] the system can determine the location of a player on the field and work to predict "exciting situations" developing within the game, which indicates the ability to predict the movement of players/objects; [0433] the location of presence of the player is predicted using the tracking system; [0434] the database has a player profile/information including colors classified as the team colors for each team, and database information is used in player location prediction and player identification; [0379] images or video of a player from a time slice (time period or time stamp of footage) are analyzed to perform facial recognition, which includes analyzing the upper portion of the image to classify the participant's skin tone, hair and other features, where the colors are associated with the timestamps of the footage), a subset of objects associated with the timestamp of the input image (Aman, [0393] the process involves verifying the time stamps on the data such that all images/data (input images) used are aligned in time; [0434] the database has a player profile/information including colors classified as the team colors for each team, and database information is used in player location prediction and player identification; the players on the field or the player in question would be the subset of the data, the whole dataset being all the players);
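For clarity of the claimed limitation (this sketch is illustrative only; the schedule entries, names and structure are hypothetical and appear in neither Aman nor the application), a database mapping timestamps at a source location to anticipated color classes could take the following form:

```python
# Hypothetical sketch of a database mapping timestamps (at a source
# location) to anticipated color classes; not content of Aman.
from datetime import datetime

SCHEDULE_DB = {
    "stadium_1": [
        # (start, end, color classes of the teams scheduled to play)
        (datetime(2021, 6, 18, 19), datetime(2021, 6, 18, 22), {"red", "white"}),
        (datetime(2021, 6, 19, 13), datetime(2021, 6, 19, 16), {"blue", "gold"}),
    ],
}

def anticipated_color_classes(timestamp, source_location):
    """Query the database, using the timestamp and the identifier of
    the source location, for the associated color classes."""
    for start, end, colors in SCHEDULE_DB.get(source_location, []):
        if start <= timestamp <= end:
            return colors
    return set()

print(sorted(anticipated_color_classes(datetime(2021, 6, 18, 20), "stadium_1")))
# ['red', 'white']
```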
The combination of Ding and Aman would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. The motivation for the combination lies in that Aman discloses a method of player tracking and identification, as well as a method of tracking and predicting scenarios on a field, which allows the system to predict the movement of objects coming into and out of view (see [0424], [0433]-[0434]); this would improve the system of Ding, which identifies objects from images but includes no tracking method. (Aman, [0424], [0433]-[0434])
Claims 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Ding (US 20210027497 A1, Filed Jul. 22, 2019) in view of Aman (US 20150297949 A1, Filed Feb. 20, 2015) and further in view of McWilliams (US 20210275893 A1).
Regarding claim 18, the combination of Ding, Aman and McWilliams teaches; The method of claim 1, wherein the object that the machine learning classifier is trained to classify in the given image into the color class is an athlete wearing a jersey of a particular color (Aman, [0372]-[0373] the upper portion of the player, including their jersey, is classified by color), and wherein the database comprises team schedule data that maps timestamps to team color classes and indicates the timestamps of when particular teams are scheduled to play.
Neither Ding nor Aman teaches; and wherein the database comprises team schedule data that maps timestamps to team color classes and indicates the timestamps of when particular teams are scheduled to play.
However, McWilliams teaches; and wherein the database comprises team schedule data that maps timestamps to team color classes and indicates the timestamps of when particular teams are scheduled to play (McWilliams, [0027] when watching a live streaming of a game through the app, the app may provide information on the player which appears on the field, that players uniform color and their schedule for games to be played, [0033] the engine allows for matching a team’s jersey color and game schedule to a live stream of a game to indicate the player statistics for the players on the field. As shown in figure 1, the system stamps the footage with the time and date for where the game is played).
The combination of Ding, Aman and McWilliams would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. The motivation for the combination lies in that the database of team schedules and scores of McWilliams would allow the system to pull up-to-date team schedules and, combined with the player tracking and identification system of Aman, would create a system capable of identifying a player and additionally pulling information about the team playing and score statistics to provide more information to the system user. (McWilliams, [0027] and [0033])
Regarding claim 19, the combination of Ding, Aman and McWilliams teaches; The system of claim 9, wherein the object that the machine learning classifier is trained to classify in the given image into the color class is an athlete wearing a jersey of a particular color (Aman, [0372]-[0373] the upper portion of the player including their jersey is classified by color), and wherein the database comprises team schedule data that maps timestamps to team color classes and indicates the timestamps of when particular teams are scheduled to play (McWilliams, [0027] when watching a live streaming of a game through the app, the app may provide information on the player which appears on the field, that players uniform color and their schedule for games to be played, [0033] the engine allows for matching a team’s jersey color and game schedule to a live stream of a game to indicate the player statistics for the players on the field. As shown in figure 1, the system stamps the footage with the time and date for where the game is played).
The combination of Ding, Aman and McWilliams would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. The motivation for the combination lies in that the database of team schedules and scores of McWilliams would allow the system to pull up-to-date team schedules and, combined with the player tracking and identification system of Aman, would create a system capable of identifying a player and additionally pulling information about the team playing and score statistics to provide more information to the system user. (McWilliams, [0027] and [0033])
Regarding claim 20, the combination of Ding, Aman and McWilliams teaches; The non-transitory computer readable medium of claim 17, wherein the object that the machine learning classifier is trained to classify in the given image into the color class is an athlete wearing a jersey of a particular color (Aman, [0372]-[0373] the upper portion of the player including their jersey is classified by color), and wherein the database comprises team schedule data that maps timestamps to team color classes and indicates the timestamps of when particular teams are scheduled to play (McWilliams, [0027] when watching a live streaming of a game through the app, the app may provide information on the player which appears on the field, that players uniform color and their schedule for games to be played, [0033] the engine allows for matching a team’s jersey color and game schedule to a live stream of a game to indicate the player statistics for the players on the field. As shown in figure 1, the system stamps the footage with the time and date for where the game is played).
The combination of Ding, Aman and McWilliams would have been obvious to one of ordinary skill in the art prior to the effective filing date of the presently claimed invention. The motivation for the combination lies in that the database of team schedules and scores of McWilliams would allow the system to pull up-to-date team schedules and, combined with the player tracking and identification system of Aman, would create a system capable of identifying a player and additionally pulling information about the team playing and score statistics to provide more information to the system user. (McWilliams, [0027] and [0033])
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Gevers et al., Color Based Object Recognition, Pattern Recognition 32 (1999) 453-456, Feb. 4, 1998.
Teaches multiple methods of mathematical color-based classification for objects. Used in the art as a basis for the development of numerous CNN based color classification systems.
Rachmadi et al., Vehicle Color Recognition Using Convolutional Neural Network, Aug. 15, 2018.
Teaches a CNN for vehicle color classification and surveillance.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JORDAN M ELLIOTT whose telephone number is (703)756-5463. The examiner can normally be reached M-F 8AM-5PM ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emily Terrell can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.M.E./Examiner, Art Unit 2666
/EMILY C TERRELL/Supervisory Patent Examiner, Art Unit 2666