DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 01/08/2025 has been considered by the examiner and placed in the application file.
Terminal Disclaimer
The terminal disclaimer filed on 01/07/2026, disclaiming the terminal portion of any patent granted on this application which would extend beyond the expiration date of Patent No. US 12,327,390 B2, has been reviewed and is accepted. The terminal disclaimer has been recorded.
Claim Objections
Claim 17 is objected to because of the following informalities:
In claim 17 at lines 5-9, the phrase “said sampling to produce a sample and prefiltering the sample using a local processor and locally stored software to determe whether the sample is a likely candidate to include objectionable content” should be changed to “said sampling to produce a sample and prefiltering the sample using a local processor and locally stored software to determine whether the sample is a likely candidate to include objectionable content” in order to correct the typographical error. Appropriate correction is required.
Response to Arguments
Double Patenting
The non-statutory double patenting rejection for claims 1-2, 6-7, 11-12, 15-16, and 21 has been withdrawn in light of the terminal disclaimer filed 01/07/2026.
Claim Rejections under 35 U.S.C. § 103
Applicant’s arguments (see remarks), filed 11/10/2025, with respect to claims 1-2, 6-7, 11-12 and 15-22 have been fully considered but are not persuasive.
Applicant argues on page 6, “The invention as recited in claim 1 is not only novel but also non-obvious with respect to the combination of HOLM and SHIREY”.
In response, the Office respectfully does not find this argument persuasive for the reasons stated below.
Based on the breadth of the claim language, the prior art by HOLM et al. (US 20200302029 A1) explicitly teaches a method of local content filtering of images (Fig. 1. Paragraph [0106]-HOLM discloses FIG. 2 is a diagram showing a schematic of an exemplary processing flow of the present Intelligent Computer Vision System 54, installed on the Computing Device 52 of FIG. 1. Further in paragraph [0116]-HOLM discloses the present systems and methods for monitoring and/or filtering images of a selected computing device user uses a MLIC algorithm (e.g. CNN) (wherein images are classified as clean or non-clean). The computing device may be a cluster of devices including at least one of individual computers, remote servers, other devices capable of communicating interactively with said computing device, and mobile devices) for a mobile device (Fig. 1. Paragraph [0106]-HOLM discloses the present embodiments provide an image monitoring and/or blocking system and method configured to block and/or monitor and record image-access activities of a particular computing device. A local computing device can be a computer, laptop, television, monitor, a mobile personal user interface unit or device, such as but not limited to a smart phone, a tablet, and other such mobile devices) comprising:
sampling (Fig. 1. Paragraph [0106]-HOLM discloses the present embodiments provide an image monitoring and/or blocking system and method configured to block and/or monitor and record image-access activities of a particular computing device) from a screen (Fig. 7. Paragraph [0122]-HOLM discloses the present systems and methods can receive the image directly from the Screen Capture Processor (that is, the numerical encoding used by the screen used by the user to view the image, said numerical encoding used by the screen to represent, store, and display the raw pixel information comprising visual media, such as still images, video streams, video frames, holographic images, other 3-dimensional images, virtual reality images, and the like). In paragraph [0123]-HOLM discloses the present systems and methods may receive video streams, video frames, holographic images, other 3-dimensional images, virtual reality images, and the like, and samples only a portion (for instance, in a video stream, sampling interval may be every half-second)) of the mobile device (Fig. 1. Paragraph [0106]-HOLM discloses a local computing device can be a computer, laptop, television, monitor, a mobile personal user interface unit or device, such as but not limited to a smart phone, a tablet, and other such mobile devices, and other such computing devices) a live image (Fig. 1. Paragraph [0106]-HOLM discloses visual media may include still images, video streams, video frames, holographic images, other 3-dimensional images, virtual reality images, and the like) obtained from a local camera of the mobile device, said sampling to produce a sample (Fig. 1. 
Paragraph [0019]-HOLM discloses the received images comprise at least one of: screen data; data of image files stored in the memory of said computing device; data from a camera; data sent from a device capable of sending images; data from an HDMI processor; data sent from a device capable of sending videos; data sent from a device capable of sending analog images; data from another computing device. Further, HOLM discloses the image access activity may include one or more of a still image, video content, video frames, holographic images, other 3-dimensional images, virtual reality images, other such content, or a combination of two or more of such content.);
prefiltering the sample using a local processor and locally stored software (Fig. 1. Paragraph [0105]-HOLM discloses the image access activity can include access of at least one image from still images, video streams, video frames, holographic images, other 3-dimensional images, virtual reality images, and the like. A method of using a monitoring system can include the steps of a computing device user voluntarily installing a monitoring program, or alternatively having a monitoring program pre-installed on a selected computing device, recording the Image access activity, and blocking (filtering) said image and/or providing the recorded information to a third party recipient. Further in paragraph [0106]-HOLM discloses the present embodiments provide an image monitoring and/or blocking system and method configured to block and/or monitor and record image-access activities of a particular computing device. For example, a local computing device can be a computer, laptop, television, monitor, a mobile personal user interface unit or device, such as but not limited to a smart phone, a tablet, and other such mobile devices, and other such computing devices) to determine whether the sample is a likely candidate to include objectionable content (Fig. 1. Paragraph [0151]-HOLM discloses the process advances at step 210 to image classifier 202 in order to classify the image at 212 by way of step 214 to determine at step 216 whether the image is scored as clean or non-clean (or however the predetermined threshold classification is set). The Image Classifier 202 analyzes input images using a known MLIC algorithm, such as a convolutional neural network (CNN) model. 
In paragraph [0157]-HOLM discloses if at step 240, a decision not to block the image is determined by the Image Processor 204, then, via output 242, the process flow advances to a determination 246 of whether to obscure the image (wherein the learning of the machine learning model may also be improved and updated using image traffic in real time). Additionally, in paragraph [0109]-HOLM discloses “clean” is simply that which may be viewed “as is” without blocking and/or monitoring. In that sense, “clean” means an image that may contain a spectrum of related characteristics, ranging from one extreme (for example, A, where A is completely non-clean) to another extreme (for example, Z, where Z is completely clean) wherein the proximity on the spectrum being close to Z (with “close” being user- or system-defined) is considered worthy of the image being forwarded directly to the output device, while being close to A (with “close” being user- or system-defined) is considered worthy of a blocking process and/or a reporting process (wherein blocking can involve network/browser/web extension interception and the threshold can be based on criteria such as the source, a user, recent or proximate blocking history, type of content, numbers of images detected, image quality, white/black listing from prior filtering/review/analyses, historical human review of the source, preferences, and/or confidence levels). Please also see Fig. 2 and read paragraph [0075, 0115, 0162, 0166-0167, 0206 and 0218-0234]).
HOLM fails to explicitly teach responding in real time to said prefiltering by: eliminating said sample from further processing in response to said prefiltering determining that said sample is not a likely candidate and further analyzing the sample for the objectionable content using an artificial intelligence routine running on said local processor in response to said prefiltering determining that said sample is a likely candidate and taking an action in real time in response to when said further analyzing identifies the objectionable content in the sample.
However, SHIREY explicitly teaches responding in real time to said prefiltering (Fig. 1. Paragraph [0017]-SHIREY discloses referring to FIG. 1, a system for filtering content 100 is illustrated. Prior to presentation (e.g., display) of content (e.g., web content), the system 100 can determine which element(s) of a received document (e.g., web page) comprise non-desired content (e.g., a particular user would likely find to be non-desired) and takes an action (e.g., removing, blocking and/or graying) with regard to the determined element(s).) by:
eliminating said sample from further processing in response to said prefiltering determining that said sample is not a likely candidate and further analyzing the sample for the objectionable content using an artificial intelligence routine (Fig. 1. Paragraph [0026]-SHIREY discloses the filter component 120 applies the model 130 using one or more machine learning algorithms including linear regression algorithms, logistic regression algorithms, decision tree algorithms, support vector machine (SVM) algorithms, Naive Bayes algorithms, a K-nearest neighbors (KNN) algorithm, a K-means algorithm, a random forest algorithm, dimensionality reduction algorithms, and/or a Gradient Boost & Adaboost algorithm) running on said local processor (Fig. 7, #720 called one or more processor(s). Paragraph [0064]. In paragraph [0015]-SHIREY discloses one or more components may reside within a process and/or thread of execution and a component may be localized on one computer. Further in paragraph [0019]-SHIREY discloses the system 100 is a component of a user's computer (e.g., as a plug-in to a web browser). In some embodiments, the system 100 is available as a service (e.g., cloud-based service) to a user (e.g., filtering performed remotely prior to information being sent to user's computer). In some embodiments, portion(s) of the system are resident on the user's computer and portion(s) of the system 100 are available as a service (e.g., cloud-based service)) in response to said prefiltering determining that said sample is a likely candidate and taking an action in real time in response to when said further analyzing identifies the objectionable content in the sample (Fig. 4. 
Paragraph [0021]-SHIREY discloses the input component 110 can provide the received document and/or particular element(s) of the received document to the filter component 120 which uses the model 130 (e.g., statistical learning system such as a classifier) to filter the element(s) determined to likely comprise non-desired content. “Non-desired content” refers to text, image(s), video(s) and/or audio which the filter component 120 determines a user of the system 100 would likely not desire to be presented (e.g., displayed). Based upon content of elements the particular user has previously indicated as non-desirable, the system 100 can calculate an approximate probability that an element of a newly received document (e.g., web page) comprises content which the particular user would likely not desire to be presented. Further in paragraph [0024]-SHIREY discloses particular element(s) to be filtered can be determined based upon a context associated with a web browsing session. When browsing a particular site and/or type of site (e.g., trusted site), a first element selection approach can be applied by the input component 140. However, when browsing a different particular site and/or different type of site (e.g., an other than trusted site), a second element selection approach can be applied by the input component 140 (wherein the input component 110 determines whether the site is trustworthy, which corresponds to the element selection approach taken). Please also read paragraphs [0027-0029, 0030 and 0033]).
Applicant argues on page 6, “HOLM describes various preprocessing techniques but does not distinctly disclose the concept of prefiltering to identify likely candidates of objectionable content followed by full analysis of only the identified candidates of the prefiltering result. HOLM focuses on preprocessing and analyzing content but lacks the specific step of eliminating samples from further local processing based on prefiltering. SHIREY's system also involves a single filtering step where elements of a document are scored for non-desirable content, and actions are taken based on the score”.
In response, the Office respectfully does not find this argument persuasive for the reasons stated above and below.
Applicant argues on page 6, “Since neither HOLM nor SHIREY teach or suggest the recited sequence of prefiltering and further analysis only of candidates identified in the prefiltering, the combination of HOLM and SHIREY does not provide any basis for the specific sequence of filtering steps recited in claim 1, particularly the elimination of non- candidates in a prefiltering step followed by further local analysis of identified candidates.”.
In response, the Office respectfully does not find this argument persuasive for the reasons stated above and below.
Applicant argues on page 7, “Furthermore, to the degree that SHIREY discusses local filtering of content it is based on calculated scores of text in a document indicative of non-desirable content in that is mixed with the text. SHIREY does not disclose efficient prefiltering of large volumes of images produced locally with no text to determine the likelihood of objectionable content and control the subsequent real-time processing of the images.”.
In response, the Office respectfully does not find this argument persuasive for the reasons stated above and below.
Applicant argues on page 7, “The real-time management of resources for a cell phone filtering pure images wherein only relevant samples undergo intensive analysis is not taught or suggested by the combination of HOLM and SHIREY.”.
In response, the Office respectfully does not find this argument persuasive for the reasons stated above and below.
Applicant argues on page 7, “While the Examiner suggests that HOLM teaches local content filtering of images on a mobile device, HOLM does not explicitly address the constraints and capabilities of small processors on mobile devices nor does it disclose the specific method of real-time processing of images on a low-power device as claimed.”.
In response, the Office respectfully does not find this argument persuasive for the reasons stated above and below.
Applicant argues on page 8, “In conclusion, the claimed method of local content filtering of images for a mobile device, involving prefiltering to determine the likelihood of objectionable content, real-time response actions, and further analysis using an artificial intelligence routine, is both novel and non-obvious over the combination of HOLM and SHIREY.”.
In response, the Office respectfully does not find this argument persuasive for the reasons stated above and below.
Applicant argues on page 8, “Based on the arguments presented, claim 1 is considered allowable over the combination of HOLM and SHIREY.”.
In response, the Office respectfully does not find this argument persuasive for the reasons stated above and below.
Applicant argues on page 8, “Claims 2, 6, 12, 15 and 21 are considered allowable at least due to their dependence on allowable claim 1.”.
In response, the Office respectfully does not find this argument persuasive for the reasons stated above and below.
Applicant argues on page 8, “Claim 17 is considered allowable over HOLM and SHIREY at least for the same reasons as claim 1 mutatts mutands. Claims 18-20 and 22 are considered allowable at least due to their dependence on allowable claim 17.”.
In response, the Office respectfully does not find this argument persuasive for the reasons stated above and below.
Applicant argues on page 8, “LORD does not remedy the deficiency of HOLM and SHIREY. Nowhere does LORD teach or imply a mobile device performing locally a two-step image analyze and/or content filtering of images using only local resources as recited in claims 1 and 17 as currently amended.”.
In response, the Office respectfully does not find this argument persuasive for the reasons stated above and below.
Applicant argues on page 9, “Based on the arguments presented, independent claims 1 and 17 as currently amended and hence dependent claims 2, 6, 7, 12, 15 and 18-22 which depend therefrom are considered allowable over HOLM, SHIREY and LORD.”.
In response, the Office respectfully does not find this argument persuasive for the reasons stated above and below.
Applicant argues on page 9, “DAY does not remedy the deficiency of HOLM and SHIREY. Nowhere does DAY teach or imply two step analysis of images on locally on a mobile device as recited in claims 1 and 17 as currently amended. Just the opposite in the abstract and throughout the application of DAY is clearly taught that analysis of content would be not on the user's device (smart phone) but on an analysis server, which is clearly a separate computing system as illustrated clearly, for example in DAY FIG. 1.”.
In response, the Office respectfully does not find this argument persuasive for the reasons stated above and below.
Applicant argues on page 10, “Based on the arguments presented, independent claims 1 and 17 as currently amended and hence dependent claims 2, 6, 7, 11, 12, 15 and 18-22 which depend therefrom are considered allowable over HOLM, SHIREY, LORD and DAY.”
In response, the Office respectfully does not find this argument persuasive for the reasons stated above and below.
Applicant argues on page 10, “DAS does not remedy the deficiency of HOLM and SHIREY. The Applicant does not find in DAS any teaching or suggestion of two step analysis of content performed locally on a mobile device. Content filtering of DAS appears to be performed on a network-based system with a lot of network and local resources. DAS does not appear to teach local screening of live video images or games.”.
In response, the Office respectfully does not find this argument persuasive for the reasons stated above and below.
Applicant argues on page 10, “Based on the arguments presented, independent claims 1 and 17 as currently amended and hence dependent claims 2, 6, 12, 15, 16 and 18-22 which depend therefrom are considered allowable over HOLM, SHIREY, LORD, DAY and DAS.”
In response, the Office respectfully does not find this argument persuasive for the reasons stated above and below.
Applicant argues on page 11, “In view of the above amendments and remarks it is respectfully submitted that claims 1, 2, 6, 7, 11, 12, and 15-22 are now in condition for allowance. A prompt notice of allowance is respectfully and earnestly solicited.”.
In response, the Office respectfully does not find this argument persuasive for the reasons stated above and below.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 6, 12, 15, 17-22 are rejected under 35 U.S.C. 103 as being unpatentable over HOLM et al. (US 20200302029 A1), hereinafter referenced as HOLM, in view of SHIREY et al. (US 20190228103 A1), hereinafter referenced as SHIREY.
Regarding claim 1, HOLM explicitly teaches a method of local content filtering of images (Fig. 1. Paragraph [0106]-HOLM discloses FIG. 2 is a diagram showing a schematic of an exemplary processing flow of the present Intelligent Computer Vision System 54, installed on the Computing Device 52 of FIG. 1. Further in paragraph [0116]-HOLM discloses the present systems and methods for monitoring and/or filtering images of a selected computing device user uses a MLIC algorithm (e.g. CNN) (wherein images are classified as clean or non-clean). The computing device may be a cluster of devices including at least one of individual computers, remote servers, other devices capable of communicating interactively with said computing device, and mobile devices) for a mobile device (Fig. 1. Paragraph [0106]-HOLM discloses the present embodiments provide an image monitoring and/or blocking system and method configured to block and/or monitor and record image-access activities of a particular computing device. A local computing device can be a computer, laptop, television, monitor, a mobile personal user interface unit or device, such as but not limited to a smart phone, a tablet, and other such mobile devices) comprising:
sampling (Fig. 1. Paragraph [0106]-HOLM discloses the present embodiments provide an image monitoring and/or blocking system and method configured to block and/or monitor and record image-access activities of a particular computing device) from a screen (Fig. 7. Paragraph [0122]-HOLM discloses the present systems and methods can receive the image directly from the Screen Capture Processor (that is, the numerical encoding used by the screen used by the user to view the image, said numerical encoding used by the screen to represent, store, and display the raw pixel information comprising visual media, such as still images, video streams, video frames, holographic images, other 3-dimensional images, virtual reality images, and the like). In paragraph [0123]-HOLM discloses the present systems and methods may receive video streams, video frames, holographic images, other 3-dimensional images, virtual reality images, and the like, and samples only a portion (for instance, in a video stream, sampling interval may be every half-second)) of the mobile device (Fig. 1. Paragraph [0106]-HOLM discloses a local computing device can be a computer, laptop, television, monitor, a mobile personal user interface unit or device, such as but not limited to a smart phone, a tablet, and other such mobile devices, and other such computing devices) a live image (Fig. 1. Paragraph [0106]-HOLM discloses visual media may include still images, video streams, video frames, holographic images, other 3-dimensional images, virtual reality images, and the like) obtained from a local camera of the mobile device, said sampling to produce a sample (Fig. 1. 
Paragraph [0019]-HOLM discloses the received images comprise at least one of: screen data; data of image files stored in the memory of said computing device; data from a camera; data sent from a device capable of sending images; data from an HDMI processor; data sent from a device capable of sending videos; data sent from a device capable of sending analog images; data from another computing device. Further, HOLM discloses the image access activity may include one or more of a still image, video content, video frames, holographic images, other 3-dimensional images, virtual reality images, other such content, or a combination of two or more of such content.);
prefiltering the sample using a local processor and locally stored software (Fig. 1. Paragraph [0105]-HOLM discloses the image access activity can include access of at least one image from still images, video streams, video frames, holographic images, other 3-dimensional images, virtual reality images, and the like. A method of using a monitoring system can include the steps of a computing device user voluntarily installing a monitoring program, or alternatively having a monitoring program pre-installed on a selected computing device, recording the Image access activity, and blocking (filtering) said image and/or providing the recorded information to a third party recipient) to determine whether the sample is a likely candidate to include objectionable content (Fig. 1. Paragraph [0151]-HOLM discloses the process advances at step 210 to image classifier 202 in order to classify the image at 212 by way of step 214 to determine at step 216 whether the image is scored as clean or non-clean (or however the predetermined threshold classification is set). The Image Classifier 202 analyzes input images using a known MLIC algorithm, such as a convolutional neural network (CNN) model. In paragraph [0157]-HOLM discloses if at step 240, a decision not to block the image is determined by the Image Processor 204, then, via output 242, the process flow advances to a determination 246 of whether to obscure the image. Please also read paragraph [0228-0229]).
HOLM fails to explicitly teach responding in real time to said prefiltering by: eliminating said sample from further processing in response to said prefiltering determining that said sample is not a likely candidate and further analyzing the sample for the objectionable content using an artificial intelligence routine running on said local processor in response to said prefiltering determining that said sample is a likely candidate and taking an action in real time in response to when said further analyzing identifies the objectionable content in the sample.
However, SHIREY explicitly teaches responding in real time to said prefiltering (Fig. 1. Paragraph [0017]-SHIREY discloses referring to FIG. 1, a system for filtering content 100 is illustrated. Prior to presentation (e.g., display) of content (e.g., web content), the system 100 can determine which element(s) of a received document (e.g., web page) comprise non-desired content (e.g., a particular user would likely find to be non-desired) and takes an action (e.g., removing, blocking and/or graying) with regard to the determined element(s).) by:
eliminating said sample from further processing in response to said prefiltering determining that said sample is not a likely candidate and further analyzing the sample for the objectionable content using an artificial intelligence routine (Fig. 1. Paragraph [0026]-SHIREY discloses the filter component 120 applies the model 130 using one or more machine learning algorithms including linear regression algorithms, logistic regression algorithms, decision tree algorithms, support vector machine (SVM) algorithms, Naive Bayes algorithms, a K-nearest neighbors (KNN) algorithm, a K-means algorithm, a random forest algorithm, dimensionality reduction algorithms, and/or a Gradient Boost & Adaboost algorithm) running on said local processor (Fig. 7, #720 called one or more processor(s). Paragraph [0064]. In paragraph [0015]-SHIREY discloses one or more components may reside within a process and/or thread of execution and a component may be localized on one computer. Further in paragraph [0019]-SHIREY discloses the system 100 is a component of a user's computer (e.g., as a plug-in to a web browser). In some embodiments, the system 100 is available as a service (e.g., cloud-based service) to a user (e.g., filtering performed remotely prior to information being sent to user's computer). In some embodiments, portion(s) of the system are resident on the user's computer and portion(s) of the system 100 are available as a service (e.g., cloud-based service)) in response to said prefiltering determining that said sample is a likely candidate and taking an action in real time in response to when said further analyzing identifies the objectionable content in the sample (Fig. 4. 
Paragraph [0021]-SHIREY discloses the input component 110 can provide the received document and/or particular element(s) of the received document to the filter component 120 which uses the model 130 (e.g., statistical learning system such as a classifier) to filter the element(s) determined to likely comprise non-desired content. “Non-desired content” refers to text, image(s), video(s) and/or audio which the filter component 120 determines a user of the system 100 would likely not desire to be presented (e.g., displayed). Based upon content of elements the particular user has previously indicated as non-desirable, the system 100 can calculate an approximate probability that an element of a newly received document (e.g., web page) comprises content which the particular user would likely not desire to be presented. Further in paragraph [0024]-SHIREY discloses particular element(s) to be filtered can be determined based upon a context associated with a web browsing session. When browsing a particular site and/or type of site (e.g., trusted site), a first element selection approach can be applied by the input component 140. However, when browsing a different particular site and/or different type of site (e.g., an other than trusted site), a second element selection approach can be applied by the input component 140 (wherein the input component 110 determines whether the site is trustworthy, which corresponds to the element selection approach taken). Please also read paragraphs [0027-0029, 0030 and 0033]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of HOLM of having a method of local content filtering of images for a mobile device, with the teachings of SHIREY of having responding in real time to said prefiltering by: eliminating said sample from further processing in response to said prefiltering determining that said sample is not a likely candidate and further analyzing the sample for the objectionable content using an artificial intelligence routine running on said local processor in response to said prefiltering determining that said sample is a likely candidate and taking an action in real time in response to when said further analyzing identifies the objectionable content in the sample.
HOLM’s method, as so modified, would thereby respond in real time to said prefiltering by: eliminating said sample from further processing in response to said prefiltering determining that said sample is not a likely candidate; further analyzing the sample for the objectionable content using an artificial intelligence routine running on said local processor in response to said prefiltering determining that said sample is a likely candidate; and taking an action in real time when said further analyzing identifies the objectionable content in the sample.
The motivation behind the modification would have been to obtain a method that improves the classification performance for objectionable content, since both HOLM and SHIREY concern content filtering applications. HOLM provides systems and methods that improve the identification, classification and blocking performance of objectionable content by proposing sub-regions of an input image, constructing an improved set of records and training databases, and using multiple methods for sampling such as screen capture, text, images, and metadata, as opposed to the prior art, which often uses only text and/or metadata, while SHIREY provides systems and methods for filtering an element of a document (e.g., web page) based upon a determined score that can be adapted to users and minimizes performance impacts. Please see HOLM et al. (US 20200302029 A1), Abstract and Paragraph [0109, 0147 and 0151] and SHIREY et al. (US 20190228103 A1), Abstract and Paragraph [0022 and 0035].
Regarding claim 2, HOLM in view of SHIREY explicitly teach the method of claim 1. HOLM further teaches further comprising sampling only a portion of the screen (Fig. 1. Paragraph [0147]-HOLM discloses a known Region Proposal Algorithm (see glossary) may be used to improve classification performance by proposing sub-regions of an input image for classification. The MLIC algorithm independently classifies each proposed image sub-region as clean or not-clean. Please also read paragraph [0078 and 0123]).
Regarding claim 6, HOLM in view of SHIREY explicitly teach the method of claim 1. HOLM fails to explicitly teach further comprising checking a resource availability on the device and performing the analyzing when there are at least a minimum free resource and not performing said analyzing when there are less than said minimum free resources.
However, SHIREY explicitly teaches further comprising checking a resource availability on the device and performing the analyzing when there are at least a minimum free resource and not performing said analyzing when there are less than said minimum free resources (Fig. 1. Paragraph [0023]-SHIREY discloses particular element(s) to be filtered can be determined based upon computing resources available to the system 100. For example, in order to minimize user frustration during peak processing times, the input component 110 can selectively apply the filter component 120 to particular elements (e.g., randomly chosen, based upon storage size of particular element, based upon display area associated with particular element) and provide any remaining element(s) directly to the output component 140 for rendering on a display to the user).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of HOLM in view of SHIREY of having a method of local content filtering of images for a mobile device, with the teachings of SHIREY of having further comprising checking a resource availability on the device and performing the analyzing when there are at least a minimum free resource and not performing said analyzing when there are less than said minimum free resources.
HOLM’s method, as so modified, would further comprise checking a resource availability on the device and performing the analyzing when there are at least a minimum free resource and not performing said analyzing when there are less than said minimum free resources.
The motivation behind the modification would have been to obtain a method that improves the filtering performance for objectionable content, since both HOLM and SHIREY concern content filtering applications. HOLM provides systems and methods that improve the identification, classification and blocking performance of objectionable content by proposing sub-regions of an input image, constructing an improved set of records and training databases, and using multiple methods for sampling such as screen capture, text, images, and metadata, as opposed to the prior art, which often uses only text and/or metadata, while SHIREY provides systems and methods for filtering an element of a document (e.g., web page) based upon a determined score that can be adapted to users and minimizes performance impacts. Please see HOLM et al. (US 20200302029 A1), Abstract and Paragraph [0109, 0147 and 0151] and SHIREY et al. (US 20190228103 A1), Abstract and Paragraph [0022 and 0035].
Regarding claim 12, HOLM in view of SHIREY explicitly teach the method of claim 1. HOLM further teaches wherein said prefiltering is performed by pretrained routines (Fig. 2. Paragraph [0115]-HOLM discloses the Image Classifier may implement a known MLIC algorithm, such as a Convolutional Neural Network (CNN, defined in glossary). The implementer then trains the MLIC algorithm (e.g. CNN model) on the resulting human-reviewed (or other automated review techniques) sample image sets (e.g. 1,000,000 images per class) using known procedures (in the embodiments using CNN, ref. CNN article). Please also read paragraph [0228-0231]).
Regarding claim 15, HOLM in view of SHIREY explicitly teach the method of claim 1. HOLM further teaches wherein the software includes a hash function or signatures for generating signatures for recognition of newly recognized undesirable content (Fig. 1. Paragraph [0153]-HOLM discloses that if the image is determined by the Image Classifier 202 to not be clean at step 216, the system via output 218 determines whether to collect metadata at 222. It is desirable to collect metadata about the image, such as a hash for cross-referencing DVD content with online movie databases. Further in paragraph [0132]-HOLM discloses the MLIC and the software may periodically be updated with new known images or videos. When more than one spectrum and more than one score of said spectra of clean vs. non-clean are individually, by at least one of a serial determination and parallel determination, processed by said image monitoring software. The process may include using the resulting analysis of each of said spectra, and said analysis may be weighted to provide the score, with said score being a summary of said weighted spectra.).
Regarding claim 17, HOLM explicitly teaches a system (Fig. 1, #52 called a computing device. Paragraph [0144]) for local content filtering of images (Fig. 1. Paragraph [0106]-HOLM discloses FIG. 2 is a diagram showing a schematic of an exemplary processing flow of the present Intelligent Computer Vision System 54, installed on the Computing Device 52 of FIG. 1. Further in paragraph [0116]-HOLM discloses the present systems and methods for monitoring and/or filtering images of a selected computing device user uses a MLIC algorithm (e.g. CNN) (wherein images are classified as clean or non-clean). The computing device may be a cluster of devices including at least one of individual computers, remote servers, other devices capable of communicating interactively with said computing device, and mobile devices) for a mobile device (Fig. 1. Paragraph [0106]-HOLM discloses the present embodiments provide an image monitoring and/or blocking system and method configured to block and/or monitor and record image-access activities of a particular computing device. A local computing device can be a computer, laptop, television, monitor, a mobile personal user interface unit or device, such as but not limited to a smart phone, a tablet, and other such mobile devices) comprising:
computer code for sampling (Fig. 1. Paragraph [0106]-HOLM discloses the present embodiments provide an image monitoring and/or blocking system and method configured to block and/or monitor and record image-access activities of a particular computing device) from a screen (Fig. 1. Paragraph [0122]-HOLM discloses the present systems and methods can receive the image directly from the Screen Capture Processor (that is, the numerical encoding used by the screen used by the user to view the image, said numerical encoding used by the screen to represent, store, and display the raw pixel information comprising visual media, such as still images, video streams, video frames, holographic images, other 3-dimensional images, virtual reality images, and the like. In paragraph [0123]-HOLM discloses the present systems and methods may receive video streams, video frames, holographic images, other 3-dimensional images, virtual reality images, and the like, and samples only a portion (for instance, in a video stream, sampling interval may be every half-second)) of the mobile device (Fig. 1. Paragraph [0106]-HOLM discloses a local computing device can be a computer, laptop, television, monitor, a mobile personal user interface unit or device, such as but not limited to a smart phone, a tablet, and other such mobile devices, and other such computing devices) a live image (Fig. 1. Paragraph [0106]-HOLM discloses visual media may include still images, video streams, video frames, holographic images, other 3-dimensional images, virtual reality images, and the like) obtained from a local camera of the mobile device, said sampling to produce a sample (Fig. 1. 
Paragraph [0019]-HOLM discloses the received images comprise at least one of: screen data; data of image files stored in the memory of said computing device; data from a camera; data sent from a device capable of sending images; data from an HDMI processor; data sent from a device capable of sending videos; data sent from a device capable of sending analog images; data from another computing device. Further, the image access activity may include one or more of a still image, video content, video frames, holographic images, other 3-dimensional images, virtual reality images, other such content, or combination of two or more of such content) and prefiltering the sample using a local processor (Fig. 6. Paragraph [0150]-HOLM discloses the Image Processor 204 captures a new image. In paragraph [0151]-HOLM discloses the process advances at step 210 to image classifier 202 in order to classify the image at 212 by way of step 214 to determine at step 216 whether the image is scored as clean or non-clean (or however the predetermined threshold classification is set) (wherein a region proposal algorithm proposes sub-regions for classification by a convolutional neural network). In paragraph [0152]-HOLM discloses if the image is determined by the Image Classifier 202 to be clean at step 216, the Image Classifier 202, via 220, forwards the image to the Image Processor 204 for output 254, after which the image is advanced via output 258 to complete the image processing 260, after which the process 200 proceeds via output 262 of the Reporting Agent 206 to return back to the beginning at 264 to capture a new image 208 via output 266. Please also read paragraph [0228-0231]) and locally stored software (Fig. 6. Paragraph [0217]-HOLM discloses computing device 610 has installed an operating system 620 which may be a hardware or software operating system.
The operating system 620 has installed on it a Windowing System 614 and a Screen Buffer 622 which is communicatively connected to the image output device 626. In addition, as shown in FIG. 6, an Obscuring and Analysis System (OAAS) 628 is installed on the computing device 610. The OAAS 628 is communicatively connected to, or optionally has installed within the OAAS 628, an image classifier 616 which may also be installed on the computing device 610. In paragraph [0239]-HOLM discloses systems and methods for monitoring use of a selected user may have a computing device having an image output device and also having an Obscuring and Analysis System (OAAS) installed thereon; wherein said OAAS may be software or hardware. The monitoring program is further configured to record the results of the monitoring of the network access activity locally on the computing device and/or at a remote server and/or service) to determine whether the sample is a likely candidate to include objectionable content (Fig. 8. Paragraph [0228]-HOLM discloses in output 818, OAAS 822 analyzes the image 802 using data from screen buffer 816 and/or image output device 820, and, using image classifier 824 determines if image 802 is clean or non-clean. In paragraph [0229]-HOLM discloses FIG. 9 illustrates actions taken for a clean image following an image classified as non-clean. When alpha-blending is used to obscure image 902, the alpha-blended image becomes the controlled image. Windowing system 910 implements appropriate actions to pass the controlled image to screen buffer 914, which in turn allows image output device 928 to display the controlled image. When alpha-blending is used to create the controlled image, OAAS 918 reverse-alpha blends the controlled image to recover image 902 using data from windowing system 910 and/or screen buffer 914 and/or image output device 928, and, using image classifier 920 determines if image 902 is clean (922) or non-clean (926).
Because image 902 was clean, no action is taken for the next input image (924). The controlled image is removed and replaced by the recovered image 902 before the next input image is input into image input device 906 (wherein classification of images is performed using a convolutional neural network). In paragraph [0115]-HOLM discloses the Image Classifier may implement a known MLIC algorithm, such as a Convolutional Neural Network. Please also read paragraph [0064, 0151 and 0157]);
HOLM fails to explicitly teach responding in real time to said prefiltering by: eliminating the sample from further processing in response to said prefiltering determining that the sample is not a likely candidate and an artificial intelligence routine running on said local processor analyzing the sample for the objectionable content in response to said determining that said sample is a likely candidate; wherein the system is a self-contained application and is further configured taking an action in real time in response to identifying the objectionable content in said analyzing.
However, SHIREY explicitly teaches responding in real time to said prefiltering (Fig. 1. Paragraph [0017]-SHIREY discloses referring to FIG. 1, a system for filtering content 100 is illustrated. Prior to presentation (e.g., display) of content (e.g., web content), the system 100 can determine which element(s) of a received document (e.g., web page) comprise non-desired content (e.g., a particular user would likely find to be non-desired) and takes an action (e.g., removing, blocking and/or graying) with regard to the determined element(s)) by:
eliminating the sample from further processing in response to said prefiltering determining that the sample is not a likely candidate and an artificial intelligence routine (Fig. 4. Paragraph [0026]-SHIREY discloses the filter component 120 applies the model 130 using one or more machine learning algorithms including linear regression algorithms, logistic regression algorithms, decision tree algorithms, support vector machine (SVM) algorithms, Naive Bayes algorithms, a K-nearest neighbors (KNN) algorithm, a K-means algorithm, a random forest algorithm, dimensionality reduction algorithms, and/or a Gradient Boost & Adaboost algorithm) running on said local processor (Fig. 4, #720 called one or more processor(s). Paragraph [0064]. In paragraph [0015]-SHIREY discloses one or more components may reside within a process and/or thread of execution and a component may be localized on one computer) analyzing the sample for the objectionable content (Fig. 4. Paragraph [0021]-SHIREY discloses the input component 110 can provide the received document and/or particular element(s) of the received document to the filter component 120 which uses the model 130 (e.g., statistical learning system such as a classifier) to filter the element(s) determined to likely comprise non-desired content. Further in paragraph [0044]-SHIREY discloses the method 400 is performed by system for filtering content 100. Please also read paragraph [0021, 0024, 0027]) in response to said determining that said sample is a likely candidate (Fig. 4. Paragraph [0021]-SHIREY discloses the input component 110 can provide the received document and/or particular element(s) of the received document to the filter component 120 which uses the model 130 (e.g., statistical learning system such as a classifier) to filter the element(s) determined to likely comprise non-desired content.
Further in paragraph [0024]-SHIREY discloses particular element(s) to be filtered can be determined based upon a context associated with a web browsing session. When browsing a particular site and/or type of site (e.g., trusted site), a first element selection approach can be applied by the input component 140. However, when browsing a different particular site and/or different type of site (e.g., an other than trusted site), a second element selection approach can be applied by the input component 140 (wherein the input component 110 determines whether the site is trustworthy, which corresponds to the element selection approach taken). Please also read paragraph [0027-0029 and 0033]);
wherein the system is a self-contained application (Fig. 1. Paragraph [0015]-SHIREY discloses the terms “component” and “system,” as well as various forms thereof (e.g., components, systems and/or sub-systems) are intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution. A component may be, but is not limited to being, a process running on a processor, a processor, an object, an instance, an executable, a thread of execution, a program, and/or a computer. Both an application running on a computer and the computer can be a component. One or more components may reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers) or exists as an add-on to an existing application (Fig. 1. Paragraph [0019]-SHIREY discloses the system 100 may be a component of a user's computer (e.g., as a plug-in to a web browser)) and is further configured taking an action in real time in response to identifying the objectionable content in said analyzing (Fig. 1. Paragraph [0021]-SHIREY discloses the model 130 can be adapted based on input received from a particular user. Based upon content of elements the particular user has previously indicated as non-desirable, the system 100 can calculate an approximate probability that an element of a newly received document (e.g., web page) comprises content which the particular user would likely not desire to be presented. The filter component 120 can utilize a speech recognizer, an image recognizer and/or an audio recognizer. In paragraph [0027]-SHIREY discloses the filter component 120 applies a scoring algorithm (e.g., model 130) to calculate a score indicative of whether an element comprises non-desired content. 
When the calculated score that a particular element comprises non-desired content is greater than or equal to a threshold, the filter component 120 takes an action regarding the particular element. In paragraph [0030]-SHIREY discloses in response to determining that a particular element comprises non-desired content, the output component 140 can take an action with respect to the particular element. The particular element determined to comprise non-desired content is removed while the remainder of the received document is displayed (wherein scoring and removing content as a user accesses a document and prior to the display of the document constitutes a real time response). Please also read paragraph [0017, 0045, and 0047]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of HOLM of having a system for local content filtering of images for a mobile device, with the teachings of SHIREY of having responding in real time to said prefiltering by: eliminating the sample from further processing in response to said prefiltering determining that the sample is not a likely candidate and an artificial intelligence routine running on said local processor analyzing the sample for the objectionable content in response to said determining that said sample is a likely candidate; wherein the system is a self-contained application and is further configured taking an action in real time in response to identifying the objectionable content in said analyzing.
HOLM’s system, as so modified, would respond in real time to said prefiltering by: eliminating the sample from further processing in response to said prefiltering determining that the sample is not a likely candidate and an artificial intelligence routine running on said local processor analyzing the sample for the objectionable content in response to said determining that said sample is a likely candidate; the modified system would be a self-contained application further configured to take an action in real time in response to identifying the objectionable content in said analyzing.
The motivation behind the modification would have been to obtain a system that improves the filtering performance for objectionable content, since both HOLM and SHIREY concern content filtering applications. HOLM provides systems and methods that improve the identification, classification and blocking performance of objectionable content by proposing sub-regions of an input image, constructing an improved set of records and training databases, and using multiple methods for sampling such as screen capture, text, images, and metadata, as opposed to the prior art, which often uses only text and/or metadata, while SHIREY provides systems and methods for filtering an element of a document (e.g., web page) based upon a determined score that can be adapted to users and minimizes performance impacts. Please see HOLM et al. (US 20200302029 A1), Abstract and Paragraph [0090, 0109, 0147 and 0151] and SHIREY et al. (US 20190228103 A1), Abstract and Paragraph [0022 and 0035].
Regarding claim 18, HOLM in view of SHIREY explicitly teach the system of claim 17. HOLM further teaches wherein the application is self-updating when new content is detected or updates of said application are included with updates of the device or the existing application (Fig. 1. Paragraph [0167]-HOLM discloses the system allows for online training (see glossary), wherein the Image Classifier 416 may be updated on the basis of new training images stored in the Database for Non-Clean 420. A copy of the model is retrained with the additional training images, at which time the retrained model replaces the old model, so as not to interrupt system flow. Please also read paragraph [0134, 0136, 0140 and 0167]).
Regarding claim 19, HOLM in view of SHIREY explicitly teach the system of claim 17. HOLM fails to explicitly teach wherein the prefiltering includes prefiltering that uses low computational cost methodologies to eliminate images with a low likelihood of undesirable content.
However, SHIREY explicitly teaches wherein the prefiltering includes prefiltering that uses low computational cost methodologies to eliminate images with a low likelihood of undesirable content (Fig. 4. Paragraph [0022]-SHIREY discloses in some embodiments, the input component 110 can utilize JavaScript and a jQuery element selector for determining elements within a document hierarchy of a received document in order to identify which element(s) within the received document (e.g., webpage) to provide to the filter component 120. In some embodiments, specific kind(s) of element(s) can be specified (e.g., hard-coded and/or user-specified) in order to minimize performance impacts of filtering objects (e.g., elements) in the document (e.g., webpage). Please also read paragraph [0023]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of HOLM in view of SHIREY of having a system for local content filtering of images for a mobile device, with the teachings of SHIREY of having wherein the prefiltering includes prefiltering that uses low computational cost methodologies to eliminate images with a low likelihood of undesirable content.
HOLM’s system, as so modified, would include prefiltering that uses low computational cost methodologies to eliminate images with a low likelihood of undesirable content.
The motivation behind the modification would have been to obtain a system that improves the classification performance for objectionable content, since both HOLM and SHIREY concern content filtering applications. HOLM provides systems and methods that improve the identification, classification and blocking performance of objectionable content by proposing sub-regions of an input image, constructing an improved set of records and training databases, and using multiple methods for sampling such as screen capture, text, images, and metadata, as opposed to the prior art, which often uses only text and/or metadata, while SHIREY provides systems and methods for filtering an element of a document (e.g., web page) based upon a determined score that can be adapted to users and minimizes performance impacts. Please see HOLM et al. (US 20200302029 A1), Abstract and Paragraph [0090, 0109, 0147 and 0151] and SHIREY et al. (US 20190228103 A1), Abstract and Paragraph [0022 and 0035].
Regarding claim 20, HOLM in view of SHIREY explicitly teach the system of claim 19, although HOLM explicitly teaches meta data (Fig. 1. Paragraph [0134]-HOLM discloses the image monitoring software also captures metadata about the image. The metadata may be used by the MLIC to help determine a score. The metadata may include at least one of filename, timestamp, title, description, tags, source code, and hash. Further in paragraph [0153]-HOLM discloses if the image is determined by the Image Classifier 202 to not be clean at step 216, the system via output 218 determines whether to collect metadata at 222).
HOLM fails to explicitly teach wherein the prefiltering using meta data to set a sampling rate and/or low resource prefilter.
However, SHIREY explicitly teaches wherein the prefiltering sets a sampling rate and/or low resource prefilter (Fig. 1. Paragraph [0022]-SHIREY discloses the input component 110 can utilize JavaScript and a jQuery element selector for determining elements within a document hierarchy of a received document in order to identify which element(s) within the received document (e.g., webpage) to provide to the filter component 120. In some embodiments, specific kind(s) of element(s) can be specified (e.g., hard-coded and/or user-specified) in order to minimize performance impacts of filtering objects (e.g., elements) in the document (e.g., webpage). Further in paragraph [0023]-SHIREY discloses particular element(s) to be filtered can be determined based upon computing resources available to the system 100. For example, in order to minimize user frustration during peak processing times, the input component 110 can selectively apply the filter component 120 to particular elements (e.g., randomly chosen, based upon storage size of particular element, based upon display area associated with particular element) and provide any remaining element(s) directly to the output component 140 for rendering on a display to the user).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of HOLM in view of SHIREY of having a system for local content filtering of images for a mobile device, with the teachings of SHIREY of having wherein the prefiltering using meta data to set a sampling rate and/or low resource prefilter.
HOLM’s system, as so modified, would perform the prefiltering using meta data to set a sampling rate and/or low resource prefilter.
The motivation behind the modification would have been to obtain a system that improves the classification performance for objectionable content, since both HOLM and SHIREY concern content filtering applications. HOLM provides systems and methods that improve the identification, classification and blocking performance of objectionable content by proposing sub-regions of an input image, constructing an improved set of records and training databases, and using multiple methods for sampling such as screen capture, text, images, and metadata, as opposed to the prior art, which often uses only text and/or metadata, while SHIREY provides systems and methods for filtering an element of a document (e.g., web page) based upon a determined score that can be adapted to users and minimizes performance impacts. Please see HOLM et al. (US 20200302029 A1), Abstract and Paragraph [0090, 0109, 0147 and 0151] and SHIREY et al. (US 20190228103 A1), Abstract and Paragraph [0022 and 0035].
Regarding claim 21, HOLM in view of SHIREY explicitly teach the method of claim 1. HOLM further teaches wherein the mobile device is a cellular phone (Fig. 1. Paragraph [0106]-HOLM discloses the present embodiments provide an image monitoring and/or blocking system and method configured to block and/or monitor and record image-access activities of a particular computing device. A local computing device can be a computer, laptop, television, monitor, a mobile personal user interface unit or device, such as but not limited to a smart phone, a tablet, and other such mobile devices).
Regarding claim 22, HOLM in view of SHIREY explicitly teach the system of claim 17. HOLM further teaches wherein the mobile device is a cellular phone (Fig. 1. Paragraph [0106]-HOLM discloses the present embodiments provide an image monitoring and/or blocking system and method configured to block and/or monitor and record image-access activities of a particular computing device. A local computing device can be a computer, laptop, television, monitor, a mobile personal user interface unit or device, such as but not limited to a smart phone, a tablet, and other such mobile devices).
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over HOLM et al. (US 20200302029 A1), hereinafter referenced as HOLM in view of SHIREY et al. (US 20190228103 A1), hereinafter referenced as SHIREY and in further view of LORD et al. (US 20110034176 A1), hereinafter referenced as LORD.
Regarding claim 7, HOLM in view of SHIREY explicitly teach the method of claim 1, although HOLM explicitly teaches wherein said prefiltering includes at least one of image resolution adjustment (Fig. 1. Paragraph [0072]-HOLM discloses changes may include change in resolution, color, aspect ratio, contrast, content, and the like, as well as obscuring or blocking or filtering or replacing or any other predetermined image altering, and image removal from a device's memory) and signature detection (Fig. 1. Paragraph [0134]-HOLM discloses the metadata may include hash from, for instance, a DVD used for cross-referencing with online movie databases (wherein a hash is a signature)).
HOLM in view of SHIREY fails to explicitly teach white balance correction, a gamma correction, an edge enhancement, image resolution adjustment, an FFT, edge detection, pattern extraction, texture classification, a color histogram, motion detection, feature recognition, entropy measuring, signature detection and skin tone recognition.
However, LORD explicitly teaches white balance correction (Fig. 50. Paragraph [0675]-LORD discloses the present technology performs one or more visual intelligence pre-processing operations on image information captured by a camera sensor. These operations may be performed without user request, and before other image processing operations that the camera customarily performs. In paragraph [0679]-LORD discloses another common operation is white balance correction. This process adjusts the intensities of the component R/G/B colors in order to render certain colors (especially neutral colors) correctly (wherein white balance correction is one of multiple preprocessing operations that may be performed). Please also read paragraph [0707 and 0710]), a gamma correction (Fig. 50. Paragraph [0680]-LORD discloses other operations that may be performed include gamma), an edge enhancement (Fig. 50. Paragraph [0680]-LORD discloses other operations that may be performed include edge enhancement.), image resolution adjustment (Fig. 50. Paragraph [0144]-LORD discloses Module 34 may further author the field 55 to specify that the sensor is to sum sensor charges to reduce resolution (e.g., producing a frame of 640.times.480 data from a sensor capable of 1280.times.960. Please also read paragraph [0199]), an FFT (Fig. 50. Paragraph [0162]-LORD discloses each of the processing stages 38 comprises hardware circuitry dedicated to a particular task. The third stage 38c may be a dedicated FFT processor), edge detection (Fig. 50. Paragraph [0162]-LORD discloses each of the processing stages 38 comprises hardware circuitry dedicated to a particular task. The first stage 38 may be a dedicated edge-detection processor. Please also read paragraph [0172]), pattern extraction (Fig. 50. Paragraph [0162]-LORD discloses each of the processing stages 38 comprises hardware circuitry dedicated to a particular task. Stages may be dedicated to other processes. 
These may include stages for performing all or part of operations such as facial recognition, optical character recognition, computation of eigenvalues, extraction of shape, color and texture feature data, barcode decoding, watermark decoding, object segmentation, pattern recognition, age and gender detection. Further in paragraph [0897]-LORD discloses the processing of imagery contemplated in this specification can use various other techniques, which can go by various names. Included are image analysis, pattern recognition, feature extraction, feature detection, template matching, facial recognition, eigenvectors, etc. Please also read paragraph [0134]), texture classification (Fig. 50. Paragraph [0162]-LORD discloses each of the processing stages 38 comprises hardware circuitry dedicated to a particular task. Stages may be dedicated to other processes. These may include stages for performing all or part of operations such as facial recognition, optical character recognition, computation of eigenvalues, extraction of shape, color and texture feature data, barcode decoding, watermark decoding, object segmentation, pattern recognition, age and gender detection. In paragraph [0529]-LORD discloses this embodiment examines the set of images and determines which image features/characteristics/metrics most reliably (1) group like-categorized images together (similarity); and (2) distinguish differently-categorized images from each other (difference). Among the attributes that may be measured and checked for similarity/difference behavior within the set of images are dominant texture; texture diversity; texture histogram), a color histogram (Fig. 50. Paragraph [0529]-LORD discloses this embodiment examines the set of images and determines which image features/characteristics/metrics most reliably (1) group like-categorized images together (similarity); and (2) distinguish differently-categorized images from each other (difference). 
Among the attributes that may be measured and checked for similarity/difference behavior within the set of images are dominant color; color diversity; color histogram; dominant texture; texture diversity; texture histogram; edginess; wavelet-domain transform coefficient histograms, and dominant wavelet coefficients; frequency domain transfer coefficient histograms and dominant frequency coefficients (which may be calculated in different color channels)), motion detection (Fig. 50. Paragraph [0718]-LORD discloses detection of motion can be accomplished in the spatial domain (e.g., by reference to movement of feature pixels between frames), or in a transform domain. Fourier transform and DCT data are exemplary. The system may extract the transform domain signature of an image component, and track its movement across different frames--identifying its motion), feature recognition (Fig. 50. Paragraph [0162]-LORD discloses each of the processing stages 38 comprises hardware circuitry dedicated to a particular task. Stages may be dedicated to other processes. These may include stages for performing all or part of operations such as facial recognition, optical character recognition, computation of eigenvalues, extraction of shape, color and texture feature data, barcode decoding, watermark decoding, object segmentation, pattern recognition, age and gender detection. Further in paragraph [0897]-LORD discloses the processing of imagery contemplated in this specification can use various other techniques, which can go by various names. Included are image analysis, pattern recognition, feature extraction, feature detection, template matching, facial recognition, eigenvectors, etc.), entropy measuring, signature detection (Fig. 50. Paragraph [0718]-LORD discloses the system may extract the transform domain signature of an image component, and track its movement across different frames--identifying its motion. 
Please also read paragraphs [0342] and [0428]) and skin tone recognition (Fig. 50. Paragraph [0492]-LORD discloses a different analysis can be employed to estimate the person-centric-ness of each image in the set obtained from Flickr. In paragraph [0502]-LORD discloses one technique is to analyze the image looking for continuous areas of skin-tone colors. Such features characterize many features of person-centric images, but are less frequently found in images of places and things. Further in paragraph [0162]-LORD discloses each of the processing stages 38 comprises hardware circuitry dedicated to a particular task. Stages may be dedicated to other processes. These may include stages for performing all or part of operations such as facial recognition, extraction of shape, color and texture feature data, and age and gender detection).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of HOLM in view of SHIREY of having a method of local content filtering of images for a mobile device, with the teachings of LORD of having wherein said prefiltering includes at least one of white balance correction, a gamma correction, an edge enhancement, image resolution adjustment, an FFT, edge detection, pattern extraction, texture classification, a color histogram, motion detection, feature recognition, signature detection and skin tone recognition.
The proposed combination yields HOLM's method wherein said prefiltering includes at least one of white balance correction, a gamma correction, an edge enhancement, image resolution adjustment, an FFT, edge detection, pattern extraction, texture classification, a color histogram, motion detection, feature recognition, signature detection and skin tone recognition.
The motivation behind the modification would have been to obtain a method that improves intuitive computing as well as the filtering performance for objectionable content, since both HOLM and LORD concern applications for image analysis. Wherein HOLM provides systems and methods that improve the identification, classification and blocking performance of objectionable content by proposing sub-regions of an input image, constructing an improved set of records and training databases, and using multiple methods for sampling such as screen capture, text, images and metadata as opposed to the prior art, which often only uses text and/or metadata, while LORD provides systems and methods for enabling a smart phone to analyze images and sounds in a user's environment and infer the user's desire in that sensed context, which, in turn, results in improvements to intuitive computing and image analysis broadly. Please see HOLM et al. (US 20200302029 A1), Abstract and Paragraph [0109, 0147 and 0151] and LORD (US 20110034176 A1), Abstract and Paragraph [0011-0013, 0839 and 0911].
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over HOLM et al. (US 20200302029 A1), hereinafter referenced as HOLM in view of SHIREY et al. (US 20190228103 A1), hereinafter referenced as SHIREY and in further view of DAY (US 20170149795 A1), hereinafter referenced as DAY.
Regarding claim 11, HOLM in view of SHIREY explicitly teach the method of claim 1. HOLM in view of SHIREY fail to explicitly teach wherein said prefiltering and said analyzing are performed by an application running under an operating system selected from the group consisting of Android, windows and IOS.
However, DAY explicitly teaches wherein said prefiltering and said analyzing (Fig. 38. Paragraph [0155]-DAY discloses images to be blocked, filtered, flagged, or analyzed can also be identified by metadata associated therewith, for example data indicating provenance from a known pornographic website, an “xxx” indication, a title relating to sex acts, a name of a known pornographic actor or actress, a word associated with erotic industries, etc. Please also read paragraph [0122]) are performed by an application running under an operating system selected from the group consisting of Android, windows and IOS (Fig. 54. Paragraph [0281]-DAY discloses when application software (“app”) or a plugin 132 corresponding to the mobile safety system 110 is installed on a user system 130, the installation process can create one or more agents or services 5410 to run in the background along with the operating system 5420 (such as Android, iOS, Windows, Linux, etc.) of the user system 130. Further in paragraph [0263]-DAY discloses the user systems described herein can include any type of operating system (“OS”). For example, the mobile computing systems described herein can implement an Android™ OS, a Windows® OS, a Mac® OS, a Linux or Unix-based OS, or the like).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of HOLM in view of SHIREY of having a method of local content filtering of images for a mobile device, with the teachings of DAY of having wherein said prefiltering and said analyzing are performed by an application running under an operating system selected from the group consisting of Android, windows and IOS.
The proposed combination yields HOLM's method wherein said prefiltering and said analyzing are performed by an application running under an operating system selected from the group consisting of Android, windows and IOS.
The motivation behind the modification would have been to obtain a method that improves productivity and safety as well as the filtering performance for objectionable content, since both HOLM and DAY concern content filtering applications. Wherein HOLM provides systems and methods that improve the identification, classification and blocking performance of objectionable content by proposing sub-regions of an input image, constructing an improved set of records and training databases, and using multiple methods for sampling such as screen capture, text, images and metadata as opposed to the prior art, which often only uses text and/or metadata, while DAY's systems and methods provide a comprehensive suite of tools and functionality for monitoring and filtering that helps improve productivity and safety while also deterring “bad” acts. Please see HOLM et al. (US 20200302029 A1), Abstract and Paragraph [0109, 0147 and 0151] and DAY (US 20170149795 A1), Abstract and Paragraph [0136-0138].
Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over HOLM et al. (US 20200302029 A1), hereinafter referenced as HOLM in view of SHIREY et al. (US 20190228103 A1), hereinafter referenced as SHIREY and in further view of DAS et al. (US 10962939 B1), hereinafter referenced as DAS.
Regarding claim 16, HOLM in view of SHIREY explicitly teach the method of claim 1. HOLM further teaches wherein personalized instructions define how strictly to sample and analyze content and how to act when objectionable content is identified (Fig. 4. Paragraph [0027]- HOLM discloses this system provides a comprehensive, user-governed architecture to accurately and efficiently capture, identify, filter and/or report objectionable visual content in the user's media stream. In paragraph [0155]-HOLM discloses users may choose to receive scheduled, automated reports on their media viewing history and habits. In paragraph [0178]-HOLM discloses the system allows a user, owner of the computing device, owner of a service providing network access and/or other such entities to establish a set of rules and/or criteria. The present embodiments can then block network access activity when the established rules and/or criteria are met. Please also read paragraph [0061 and 0088]).
HOLM in view of SHIREY fail to explicitly teach wherein personalized instructions define how strictly to sample and analyze content, how many resources to use in analysis, and how to act when objectionable content is identified.
However, DAS explicitly teaches how many resources to use in analysis (Fig. 4. Column [04], Line [61-68]-DAS discloses the resources can be allocated for any of a number of different purposes for performing a variety of different tasks, including receiving a query image, classifying the query image, determining whether the query image is a restricted image, among others. The client 420 can access a customer allocated resource environment 402, or sub-environment. The client can provide access to the various resources to users (e.g., employees or contractors) under the credentials or roles for that account. In this example, there can be a set of resources, both computing resources 408 and data resources 410, among others, allocated on behalf of the client in the resource provider environment 312. These can be physical and/or virtual resources, but during the period of allocation the resources (or allocated portions of the resources) are only accessible using credentials associated with the client account. Please also see Column [02], Line [45-68]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of HOLM in view of SHIREY of having a method of local content filtering of images for a mobile device, with the teachings of DAS of having how many resources to use in analysis.
The proposed combination yields HOLM's method wherein the personalized instructions further define how many resources to use in analysis.
The motivation behind the modification would have been to obtain a method that improves filtering performance for objectionable content as well as processing efficiency and performance, since both HOLM and DAS concern content filtering applications. Wherein HOLM provides systems and methods that improve the classification performance by proposing sub-regions of an input image, while DAS's systems and methods provide customizable content moderation using neural networks with a fine-grained and dynamic image classification ontology that improves processing efficiency and performance. Please see HOLM et al. (US 20200302029 A1), Abstract and Paragraph [0109, 0147 and 0151] and DAS et al. (US 10962939 B1), Abstract and Column [10], Line [01-20].
Conclusion
The prior art made of record and not relied upon, which is considered pertinent to applicant's disclosure, is listed below.
Wurtenberger et al. (US 8359642 B1)- Methods, devices, and products provide for restricting access to mature content by individuals for whom access to the mature content is designated as inappropriate. A content filter receives a communication, determines that the communication includes an image, and extracts the image. The image is scanned for mature content. A content restrictor component restricts access by various classes of users to the mature content. Please see Fig. 1-3. Abstract.
GUO et al. (US 20170134406 A1)- A digital content system enables users of the content system to access, view and interact with digital content items in a safe, efficient and enjoyable online environment. The content system pre-filters an image content item and determines whether the content item is suspicious of having unsafe content, e.g., nudity and pornography. For example, the content system pre-filters an image content item based on the source of the image content item. A content item from a source known for providing safe content is determined to be safe. The content system determines an image content item to be safe if the content item matches a content item known to be safe or if the content item contains less than a threshold amount of human skin. The content system may further verify the content of the image content item with a verification service and takes remedial actions based on the verification result. Please see Fig. 1-2. Abstract.
Zavesky et al. (US 20200077144 A1)- Concepts and technologies directed to screening streaming content with locked application programming interfaces are disclosed herein. Embodiments can include a system that is configured to perform operations that can include detecting a content stream directed to a media application on a user equipment, where audiovisual content of the content stream is presented on a display. The operations can include determining that an application programming interface corresponding to the media application is locked such that the audiovisual content from the content stream is not accessible via the application programming interface; accessing the audiovisual content that is being presented on the display without accessing the application programming interface corresponding to the media application; and scraping the audiovisual content from the display for a time period, wherein the scraping creates scraped audiovisual content corresponding to the audiovisual content that was presented on the display during the time period. Please see Fig. 1-4. Abstract.
MEI et al. (US 20200162412 A1)- A computer implemented method of pre-emptively blocking an electronic communication is provided. The computer implemented method includes inputting an electronic communication history, wherein the electronic communication history includes a plurality of electronic communications and a set of corresponding recipients for each of the plurality of electronic communications. The computer implemented method further includes normalizing the plurality of electronic communications, and extracting a topic from each of the plurality of electronic communications. The computer implemented method further includes clustering the plurality of electronic communications according to the extracted topics, and digesting the plurality of electronic communication to form a positive learning data set and a negative learning data set to train a neural network. The computer implemented method further includes training the neural network on the positive learning data set and the negative learning data set, and preparing a positive neural network model and a negative neural network model. Please see Fig. 1-4. Abstract.
AVILA et al. (US 20170289624 A1)- A multimodal and real-time method for filtering sensitive content, receiving as input a digital video stream, the method including segmenting digital video into video fragments along the video timeline; extracting features containing significant information from the digital video input on sensitive media; reducing the semantic difference between each of the low-level video features, and the high-level sensitive concept; classifying the video fragments, generating a high-level label (positive or negative), with a confidence score for each fragment representation; performing high-level fusion to properly match the possible high-level labels and confidence scores for each fragment; and predicting the sensitive time by combining the labels of the fragments along the video timeline, indicating the moments when the content becomes sensitive. Please see Fig. 2-4. Abstract.
Vaughn et al. (US 20200145723 A1)- Aspects of the present invention provide an approach for customizing media content being consumed at a location. For each of the viewers in a group consuming the media content at the location, a media profile having a set of media content preferences is created. These media profiles are aggregated to generate a composite profile that has a set of content restriction preferences for the group. As the media content is provided to and being consumed by the group, the media content is analyzed to identify any elements that have attributes that may be unsuitable to some viewers. If an element has a suitability attribute that violates the content restriction preferences for the group, the media content is modified to filter out the element. Please see Fig. 2-6. Abstract.
FOX et al. (US 20200077150 A1)- A method of filtering images of live stream content may include defining a prohibited frame content template; analyzing live stream content at a frame level to determine content within each frame of the live stream content; and comparing a frame of the live stream content against the prohibited frame content template to detect prohibited content in the frame that matches prohibited frame content as defined by the prohibited frame content template. Please see Fig. 4. Abstract.
EMENS et al. (US 6493744 B1)- An automatic method for rating data files for objectionable content in a distributed computer system includes preprocessing the file to create semantic units, comparing the semantic units with a rating repository containing entries and associated ratings, assigning content rating vectors to the semantic units, and creating a modified data file incorporating rating information derived from the content rating vectors. For text files, the semantic units are words or phrases, and the rating repository also contains words or phrases with corresponding content rating vectors. For audio files, the file is first converted to a text file using voice recognition software. For image files, image processing software is used to recognize individual objects and compare them to basic images and ratings stored in the rating repository. Please see Fig. 1, 3 and 4. Abstract.
PEARCE et al. (US 20200169787 A1)- Systems and methods are described herein for recommending content restrictions to a user based on chatter in a social network of the user. The system analyzes chatter in the social network to identify a correlation between what is posted by users and the content that the users are posting about. The system stores a mapping between chatter and expected attributes of the content referenced by the chatter. The system will determine whether to block the content when an expected attribute is associated with a content restriction. Please see Fig. 1-3. Abstract.
GNANASEKARAN et al. (US 20160171109 A1)- A system may perform web content filtering in real time. In particular, the system may review and analyze the web contents, including any image, video, sound, voices, text, to identify and filter out any content inappropriate for a user as the system is receiving the web content in real time. In an embodiment, the content analysis may include voice recognition, image recognition, natural language processing with multi-lingual support. Thus, the system may analyze and filter out web contents that are inappropriate for a user in real time. Further, the system may learn and build patterns of sound, image, video, text language that resemble inappropriate contents and may use the patterns to identify web contents that are not appropriate to the user. Please see Fig. 1-4. Abstract.
RYAN et al. (US 20170061248 A1)- The present invention is directed at a system, method and device for detecting offensive content on a portable electronic device, by monitoring communications sent, received or stored on the portable electronic device, and wherein monitoring comprises collecting content data, classifying content data by calculating an alert score for content data wherein an alert score corresponds to offensive content detected, and sending an alert notification to a second portable electronic device to alert the detection of offensive content on the first portable electronic device. Please see Fig. 2-6. Abstract.
BROWN et al. (US 20130117464 A1)- Personalized media filtering for a mobile electronic device can be implemented. By supporting receipt of personal media filter criteria, flexible personalization options can be implemented. Personalized media filtering can allow for interactively receiving personal media filter criteria and applying the filter criteria to the media content during the media presentation. One possible blocking response is to block portions of the content. Other possible responses include switching between broadcast stations or playlists. A prevalence metric can indicate how often a particular content item, such as a word, has been filtered from the media content. Please see Fig. 1-4. Abstract.
EBADOLLAHI et al. (US 20090234831 A1)- The present invention is directed to a method and apparatus for assisting in rating and filtering multimedia content, such as images, videos and sound recordings. One embodiment comprises a computer implemented method for rating the objectionability of specified digital content that comprises one or more discrete content items, wherein the method includes the step of moving the specified content to one or more filtering stages in a succession of filtering stages. After the specified content is moved to a given one of the filtering stages, a rating procedure is carried out to determine whether a rating can be applied to one or more of the content items, and if so, a selected rating is applied to each of the one or more content items. Please see Fig. 1-4. Abstract.
MITTAL (US 11450104 B1)- Techniques are generally described for removal of objectionable content from video streams. In various examples, a first frame of image data comprising a two-dimensional grid of pixels is received. First data identifying at least one pixel of the first frame for obfuscation prior to display by a recipient computing device may be received. In some examples, a first segmentation map may be generated based at least in part on the first data. In some examples, pixel values of one or more pixels of the first frame of image data may be changed according to the first segmentation map. In some examples, the first frame of image data may be sent to a recipient computing device. Please see Fig. 4-7. Abstract.
LAMBE et al. (US 20090128573 A1)- In one aspect, the invention relates to a method for blocking or otherwise regulating content. The method includes the steps of intercepting a call to a graphics API; determining if the image meets the requirements for further analysis; and if the image meets the requirements for further analysis, generating a structure to represent the array of pixels in the image; analyzing the image structure for determination of inappropriate content; and preventing the display of the image if the determination is that the content is inappropriate. Please see Fig. 4-5. Abstract.
CHERIFI et al. (US 9762462 B2)- An approach is provided for an anti-bullying service. A service platform monitors interaction data from one or more applications, wherein the interaction data is associated with an interaction between a source and a target. The service platform analyzes the interaction data to parse one or more indicators of a monitored conduct between the source and the target. The service platform then initiates at least one of (a) a recording of the interaction data; (b) a transmission of an alert message, the one or more indicators, the interaction data, the monitored conduct, or a combination thereof to a third party; and (c) a pushing of an anti-conduct application to a source device associated with the source, a target device associated with the target, or a combination thereof based on the monitored conduct, the one or more indicators, or a combination thereof. Please see Fig. 1 and 7-9. Abstract.
Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Aaron Bonansinga whose telephone number is (703) 756-5380. The examiner can normally be reached on Monday-Friday, 9:00 a.m. - 6:00 p.m. ET.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chineyere Wills-Burns, can be reached by phone at (571) 272-9752. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AARON TIMOTHY BONANSINGA/Examiner, Art Unit 2673 /CHINEYERE WILLS-BURNS/Supervisory Patent Examiner, Art Unit 2673