DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
2. Applicant’s arguments with respect to claim(s) 1-17, 19 and 20 have been considered but are moot in view of a new ground(s) of rejection. The amendments to the claims necessitated the new ground(s) of rejection discussed below.
With respect to the last Office action, Applicant amends the claims, discusses the claim limitations and the prior art references of record (PARs), and further argues that the PARs do not meet the amended claim limitations (see Applicant’s Remarks).
In response, Examiner notes Applicant’s arguments; however, the PARs still meet the amended claim limitations for the following reasons. As discussed below, the primary PAR, ELENBAAS, discloses a personalized news retrieval system and further a method for monitoring a played content that sends visual characteristics or object or image types and/or identifiers to the displaying device at any moment of the current monitoring time duration (broadcast or stream), and acquires, from the displaying device, a displayed-page screenshot that includes content identification(s) (captured screen or background image or frame that includes content IDs) in response to the captured visual characteristics or object or image types and/or identifier (see figs. 1+, Video Retrieval System “VRS”, [0007-0009] and [0017-0025]; the VRS receives broadcast segments: audio, video, text, and EPG data including the on-line guide; and captures visual characteristics of objects or segments within a frame or background image), including location parameters; wherein the displayed-page screenshot is obtained by performing screen shooting on a displayed page of the displaying device; extracting a first content identification code (object, segment, etc., ID, including name(s), anchor, or presenter) from the displayed-page screenshot, wherein the first content identification code is for identifying a to-be-monitored content that is currently played by the displaying device; and, based on the first content identification code, determining whether the to-be-monitored content is compliant ([0007-0009], [0017-0025], [0028-0035] and [0038-0046]). The VRS stores specific background information on the topic, genre, etc. within the library database; sets background information on topics, genres, etc.
to generate personalized video data based on the background information of the topic, genre, etc. within the library database; automatically scans received broadcast video and EPG data, including the on-line guide, audio, video, text, and other information segments, to detect various types of scene changes (movement of objects within a frame or sets of frames) and to segment genres, manually or automatically using a classifier; performs ranking, weighting, rating, etc., including using AI and other knowledge-based systems to adjust various factors and filter results (positive and/or negative); and performs the other claimed limitations as discussed below. ELENBAAS discloses a personalized news retrieval system that includes a visual characterizer to capture various content segments, BUT appears silent as to acquiring a displayed-page screenshot that includes content identification code(s) and sending the captured characteristics in real time within the specific duration for processing. However, RATHOD discloses displaying user-related contextual keywords, controlling user selections, storing and performing various user interactions, and further processing screenshots of the user that include content IDs, including communicating data to a content provider or third-party systems for processing the data in real time within a specific duration accordingly (see Abstract, figs. 1+, [0344-0345], [0589-0590], [0625-0626] and [0659-0661]), as discussed below. Hence the amended claims do not overcome the PARs. The amendments to the claims necessitated the new ground(s) of rejection discussed below. This Office action is made FINAL.
Claim Rejections - 35 USC § 103
3. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
4. Claim(s) 1-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over ELENBAAS et al. (US 2005/0028194) in view of RATHOD (US 2022/0179665).
As to claims 1-4, ELENBAAS discloses a personalized news retrieval system and further a method for monitoring a played content, wherein the method comprises:
Sending visual characteristics or object or image types and/or identifiers to the displaying device at any moment of the current monitoring time duration (broadcast or stream); acquiring, from the displaying device, a displayed-page screenshot that includes content identification(s) (captured screen or background image or frame that includes content IDs) in response to the captured visual characteristics or object or image types and/or identifier (figs. 1+, Video Retrieval System “VRS”, [0007-0009] and [0017-0025]; the VRS receives broadcast segments: audio, video, text, and EPG data including the on-line guide; and captures visual characteristics of objects or segments within a frame or background image), including location parameters, wherein the displayed-page screenshot is obtained by performing screen shooting on a displayed page of the displaying device; extracting a first content identification code (object, segment, etc., ID, including name(s), anchor, or presenter) from the displayed-page screenshot, wherein the first content identification code is for identifying a to-be-monitored content that is currently played by the displaying device; and, based on the first content identification code, determining whether the to-be-monitored content is compliant ([0007-0009], [0017-0025], [0028-0035] and [0038-0046]); the VRS stores specific background information on the topic, genre, etc. within the library database; sets background information on topics, genres, etc. to generate personalized video data based on the background information of the topic, genre, etc.
within the library database; automatically scans received broadcast video and EPG data, including the on-line guide, audio, video, text, and other information segments, to detect various types of scene changes (movement of objects within a frame or sets of frames) and to segment genres, manually or automatically using a classifier; and performs ranking, weighting, rating, etc., including using AI and other knowledge-based systems to adjust various factors and filter results (positive and/or negative);
Based on the first content identification code, performing first-stage checking, wherein the first-stage checking is for determining whether the to-be-monitored content has an abnormality; if the first-stage checking is not passed, performing second-stage checking on the displayed-page screenshot, wherein the second-stage checking is for determining an abnormality category of the to-be-monitored content; and, based on a result of the second-stage checking, determining whether the to-be-monitored content is compliant; performing frame-abnormality detection on the displayed-page screenshot to determine whether the to-be-monitored content has an abnormal frame of a first type and/or an abnormal frame of a second type, wherein the first type is a type of screen abnormality and the second type is a type comprising content abnormality; and, if an abnormal frame of the first type exists, determining that the to-be-monitored content is compliant ([0017-0025], [0028-0035] and [0038-0046]); the VRS detects IDs of visual objects or segments; further includes a Visual Characterizer and classification, filtering, and sorting systems; provides sets of characteristics for automatic, semiautomatic, etc., segmenting of the video information within a specific time, using other EPG information and based on the background information, image recognition techniques, and other features of the video; performs rankings, weightings, ratings, etc.; uses predetermined set(s) of text segments, audio segments, key frames, and other features within the video stream to detect scene changes (movement, semantics, etc.); performs ranking, weighting, etc., by time (duration); adjusts various factors and filters results (positive and/or negative); and filters frame features based on the frequency or duration of the various predetermined factors;
and, if an abnormal frame of the second type exists, determining that the to-be-monitored content is non-compliant; inputting the displayed-page screenshot into a first detecting model and, according to a result outputted by the first detecting model, determining whether the to-be-monitored content has an abnormal frame of the first type; and/or inputting the displayed-page screenshot into a second detecting model and, according to a result outputted by the second detecting model, determining whether the to-be-monitored content has an abnormal frame of the second type; wherein the first detecting model is obtained by training a first neural network using a plurality of images carrying the screen abnormality as training samples, and the second detecting model is obtained by training a second neural network using a plurality of images carrying an abnormal content as training samples; sending the displayed-page screenshot to a content issuing server (knowledge-based systems: fuzzy logic, neural nets, semantic processing and other techniques) to cause the content issuing server to perform the second-stage checking; and the step of, based on the result of the second-stage checking, determining whether the to-be-monitored content is compliant comprises: based on the result of the second-stage checking returned by the content issuing server, determining whether the to-be-monitored content is compliant ([0017-0025], [0028-0035] and [0038-0046]); the VRS detects IDs of visual objects or segments, using various knowledge-based systems (fuzzy logic, neural nets, semantic processing and other techniques: learning, training, etc.) to detect visual content (name(s), anchor, people, etc.) and to classify, filter, and/or sort content segments using sets of characteristics for automatic, semiautomatic, etc.,
segmenting of the video information within a specific time, using other EPG information and based on the background information, image recognition techniques, and other features of the video; performs rankings, weightings, ratings, etc.; uses predetermined set(s) of text segments, audio segments, key frames, and other features within the video stream to detect scene changes (movement, semantics, etc.); performs ranking, weighting, etc., by time (duration); adjusts various factors and filters results (positive and/or negative); and filters frame features based on the frequency or duration of the various predetermined factors.
ELENBAAS discloses a personalized news retrieval system that includes a visual characterizer to capture various content segments, BUT appears silent as to acquiring a displayed-page screenshot that includes content identification code(s) and sending the captured characteristics in real time within the specific duration for processing.
However, RATHOD discloses displaying user-related contextual keywords, controlling user selections, storing and performing various user interactions, and further processing screenshots of the user that include content IDs, including communicating data to a content provider or third-party systems for processing the data in real time within a specific duration accordingly (Abstract, figs. 1+, [0344-0345], [0589-0590], [0625-0626] and [0659-0661]).
Hence it would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to incorporate the teaching of RATHOD into the system of ELENBAAS to use other application(s) for capturing content with content ID codes within a screen or an interface for processing in the desired specific application.
As to claim 6, ELENBAAS further discloses a personalized news retrieval system that includes a visual characterizer to capture various content segments, including filtering to play only desired contents or segments, BUT appears silent as to shutting down the displaying device to forbid the displaying device from performing content playing, and/or sending an alarming signal to indicate that the to-be-monitored content is non-compliant.
However, RATHOD further discloses shutting down the displaying device to forbid the displaying device from performing content playing, and/or sending an alarming signal to indicate that the to-be-monitored content is non-compliant ([0017-0021], [0031-0034], [0061-0064], [0124-0128] and [0194-0196]).
Hence it would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to incorporate the teaching of RATHOD into the system of ELENBAAS to restrict presentation of specific segments of contents as desired.
As to claim 7, ELENBAAS further discloses that the method further comprises: for a predetermined content that is to be sent to the displaying device to be played, generating an identity-code datum (keyword(s), specific genre, or commercials) corresponding to the predetermined content; based on the identity-code datum, generating a second content identification code corresponding to the predetermined content; and sending the predetermined content and the second content identification code to the displaying device, to cause the displaying device to, when playing the predetermined content, exhibit the second content identification code in the displayed page; wherein the second content identification code serves as a comparison with the first content identification code, for determining whether the to-be-monitored content is compliant ([0017-0025], [0028-0035] and [0038-0046]); note remarks in claims 1-5.
As to claims 8-9, ELENBAAS further discloses acquiring a second content identification code corresponding to a to-be-monitored playing time duration, wherein the second content identification code corresponds to a predetermined content that is played within the to-be-monitored playing time duration; based on the first content identification code and the second content identification code, determining whether the to-be-monitored content is compliant; parsing the second content identification code and the first content identification code to obtain first intermediate data corresponding to the second content identification code and the first content identification code individually, wherein the first intermediate data are intermediate data in a process of converting the identity-code datum corresponding to the content into the content identification code; and, based on the first intermediate data corresponding to the second content identification code and the first content identification code individually, determining whether the to-be-monitored content is compliant;
wherein the first intermediate data are abstracts of the identity-code datum of a played content ([0019-0030] and [0033-0040]); note remarks in claims 1-5.
As to claim 10, ELENBAAS further discloses wherein the second content identification code further comprises a hiding region, the hiding region is for carrying a second intermediate datum corresponding to the identity-code datum, and the step of, based on the first intermediate data corresponding to the second content identification code and the first content identification code individually, determining whether the to-be-monitored content is compliant comprises: determining whether the first intermediate data corresponding to the second content identification code and the first content identification code individually are totally consistent; if the first intermediate data corresponding to the second content identification code and the first content identification code individually are not totally consistent, parsing the hiding regions of the second content identification code and the first content identification code to obtain second intermediate data corresponding to the second content identification code and the first content identification code individually, wherein the second intermediate data are generated before the first intermediate data; and, based on the second intermediate data corresponding to the second content identification code and the first content identification code individually, determining whether the to-be-monitored content is compliant ([0017-0025], [0028-0035] and [0038-0046]); the VRS detects IDs of visual objects or segments; discloses hiding regions, generating split screens, and segments appearing and disappearing (anchor, commercials, etc.); and furthermore uses various knowledge-based systems (fuzzy logic, neural nets, semantic processing and other techniques: learning, training, etc.) to detect visual content (name(s), anchor, people, etc.) and to classify, filter, and/or sort content segments using sets of characteristics for automatic, semiautomatic, etc., segmenting of the video information.
As to claims 11-12, ELENBAAS further discloses wherein the step of, based on the second intermediate data corresponding to the second content identification code and the first content identification code individually, determining whether the to-be-monitored content is compliant comprises: comparing the two second intermediate data bit by bit (movements of frames, segments of content, categories, sub-categories, etc.); if the two second intermediate data are totally consistent, determining that the to-be-monitored content is compliant; and, if the two second intermediate data are not totally consistent, determining that the to-be-monitored content is non-compliant; wherein, if the two second intermediate data are not totally consistent, the method further comprises: acquiring an inconsistent field of the two second intermediate data and a content attribute identified by the inconsistent field; and, based on the content attribute (features or semantics) identified by the inconsistent field, generating an alarming signal of a corresponding grade (appearing, disappearing, etc., content segments), wherein the content attribute comprises at least one of an identifier attribute and a playing-time-duration attribute of the displaying device ([0017-0025], [0028-0035] and [0038-0046]); content segments and sub-content segments or sets of segments are identified based on location and composite parameters, and may be analyzed for sounds of laughter, explosions, gunshots, cheers, etc., and for movements.
As to claims 13-15, ELENBAAS further discloses wherein the step of, based on the identity-code datum, generating the second content identification code of the predetermined content comprises: performing a plurality of types of encoding iteratively on the identity-code datum to obtain an intermediate datum for each of the types of encoding; and mapping a first intermediate datum in the intermediate data corresponding to the plurality of types of encoding to a plurality of checkpoints in a predetermined transparent image to obtain the second content identification code; wherein the first intermediate datum is a decimal grayscale datum, and numerical values at different positions in the first intermediate datum correspond to checkpoints at different positions; wherein the step of performing the plurality of types of encoding iteratively on the identity-code datum to obtain the intermediate datum for each of the types of encoding comprises: performing binary conversion on the identity-code datum; performing hexadecimal conversion on an intermediate datum obtained by the binary conversion; and performing decimal conversion on an intermediate datum obtained by the hexadecimal conversion, to obtain the decimal grayscale datum; and wherein the predetermined transparent image is provided with a datum hiding region, the datum hiding region is for carrying a second intermediate
datum corresponding to the identity-code datum, and the method further comprises: acquiring a second intermediate datum in the intermediate data corresponding to the plurality of types of encoding, wherein the second intermediate datum is generated before the first intermediate datum and the second intermediate datum is a binary datum; and, based on an encoding value of each of the bits in the second intermediate datum, updating the content segment presentation accordingly (a grayscale value) of the datum hiding region ([0017-0025], [0027-0035] and [0038-0046]); content segments and sub-content segments or sets of segments are identified based on location and composite parameters, and may be analyzed for sounds of laughter, explosions, gunshots, cheers, etc., and for movements; background with content segments appearing and/or disappearing with a specific degree of composition.
ELENBAAS discloses a personalized news retrieval system that includes a visual characterizer to capture various content segments, BUT appears silent as to wherein the first intermediate datum is a decimal grayscale datum, and numerical values at different positions in the first intermediate datum correspond to checkpoints at different positions; performing binary conversion on the identity-code datum; performing hexadecimal conversion on an intermediate datum obtained by the binary conversion; and performing decimal conversion on an intermediate datum obtained by the hexadecimal conversion, to obtain the decimal grayscale datum.
However, RATHOD discloses displaying user-related contextual keywords, controlling user selections, storing and performing various user interactions, and further processing screenshots of the user that include content IDs; and further discloses a decimal grayscale datum, with numerical values at different positions in the first intermediate datum corresponding to checkpoints at different positions; performing binary conversion on the identity-code datum; performing hexadecimal conversion on an intermediate datum obtained by the binary conversion; and performing decimal conversion on an intermediate datum obtained by the hexadecimal conversion, to obtain the decimal grayscale datum, or superimposed content segments or images accordingly over other content segments or images ([0120-0124], [0143-0147], [0344-0345], [0589-0590], [0625-0626] and [0659-0661]); RATHOD uses standardized encoding modes (numeric, alphanumeric, byte/binary, kanji, etc.) for processing of various content segments within desired screen locations or regions.
Hence it would have been obvious before the effective filing date of the claimed invention to one of ordinary skill in the art to incorporate the teaching of RATHOD into the system of ELENBAAS to use other application(s) for various encoding to superimposed content segments accordingly.
As to claim 16, the claimed “A playing controlling mainframe…” is composed of the same structural elements as discussed with respect to claims 1-5.
As to claim 17, the claimed “A system for monitoring playing…” is composed of the same structural elements as discussed with respect to claims 1-5.
As to claim 18, the claimed “A apparatus for monitoring playing…” is composed of the same structural elements as discussed with respect to claims 1-5.
As to claim 19, the claimed “A computer-readable medium…” is composed of the same structural elements as discussed with respect to claims 1-5.
As to claim 20, the claimed “A electronic device…” is composed of the same structural elements as discussed with respect to claims 1-5.
Conclusion
5. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
6. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANNAN Q SHANG whose telephone number is (571)272-7355. The examiner can normally be reached Monday-Friday 7-4.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, BRUCKART BENJAMIN can be reached at 571-272-3982. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANNAN Q SHANG/ Primary Examiner, Art Unit 2424