Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-14 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
As per claim 1:
Claim 1 recites the limitation "digital notes" in line 3. There is insufficient antecedent basis for this limitation in the claim.
As per the limitation, “capturing physical notes on the wall or surface, via an image capture device, and converting the physical notes to corresponding digital notes;” it is not clear exactly what is being claimed. Are the “corresponding digital notes” the same digital notes recited earlier in the claim, or are they different digital notes? Or are the “corresponding digital notes” a reference to the physical notes?
As per the limitations, “displaying on a mixed reality device the digital notes combined with the physical notes when the wall or surface is viewed through the mixed reality device; and displaying the digital notes on a common electronic board for the users,” it is not clear which digital notes are being referenced. Are they the “corresponding digital notes” addressed above, which carry the indefiniteness issues detailed above? Are they the digital notes of line 3 or the digital notes of line 1? The claims as a whole are largely unclear. Under this rationale, a best attempt at applying the prior art will be made.
As per claim 2:
Claim 2 recites the limitation "digital notes" in line 2. There is insufficient antecedent basis for this limitation in the claim.
As per claim 3:
Claim 3 recites the limitation "digital notes" in line 2. There is insufficient antecedent basis for this limitation in the claim.
As per claim 6:
Claim 6 recites the limitation "the displayed digital notes" in line 2. There is insufficient antecedent basis for this limitation in the claim.
As per claim 7:
As per the limitation, “converting the detected physical note to a corresponding digital note; and” it is not clear exactly what is being claimed. Is the “corresponding digital note” an existing digital note recited before the quoted limitation, or is it a different digital note? Or is the “corresponding digital note” a reference to the physical note?
As per the limitation, “if the physical note has an obstruction, removing the obstruction in the digital note,” it is not clear exactly what is being claimed. The limitation recites a condition, but it is not clear what happens when the condition fails: if the physical note does not have an obstruction, what happens? Is an altogether different conditional step supposed to occur when the condition fails? This is unclear.
As per claim 8:
Claim 8 recites, “filling in an area of the note corresponding with the obstruction to simulate a full note for the digital note.” It is not clear exactly what is being claimed. Which note is being referenced? The limitation recites “the note,” and it is not clear whether it refers to one of the digital notes recited in the independent claim, to a physical note, or to a different note altogether. Furthermore, what is a “full note”? Does it refer to the preceding “the note,” or to some other digital or physical note? The recitation of “the digital note” is also unclear under the rationale detailed for claim 7. Additionally, is “the digital note” referencing “the note” or “a full note”? This is unclear.
As per claim 11:
Claim 11 recites the limitation "digital notes" in line 3. There is insufficient antecedent basis for this limitation in the claim.
The examiner notes that the dependent claims are all indefinite by virtue of depending from the indefinite independent claims.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 7-10 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding claim 7, it recites a method for generating a digital note, comprising operations executed by a processor: automatically detecting a physical note via an image capture device; converting the detected physical note to a corresponding digital note; and if the physical note has an obstruction, removing the obstruction in the digital note.
MPEP 2106, Section III provides a flowchart for the subject matter eligibility test for products and processes. The analysis following the flowchart is as follows:
Step 1: Is the claim to a process, machine, manufacture or composition of matter?
Yes. It recites a method, which is a process.
Step 2A, Prong One: Does the claim recite an abstract idea, law of nature, or natural phenomenon?
Yes. It recites “a method for generating a digital note, comprising operations executed by a processor: automatically detecting a physical note via an image capture device; converting the detected physical note to a corresponding digital note; and if the physical note has an obstruction, removing the obstruction in the digital note.” Generating a note existed long before processors; one can carry out the content of a digital note mentally or on paper. MPEP 2106.04(a)(2).III.C states that a claim that requires a computer may still recite a mental process. Here, a processor (a generic computer component) is used to generate a digital note (a mental process); therefore, the claim still recites the mental process as the abstract idea. Detecting a physical note can also be performed in a person’s mind as a mental process, as a physical note is observable and understood mentally upon observation. Converting a physical note to a digital one can likewise be carried out mentally or with pen and paper. Lastly, the condition of the physical note having an obstruction can be resolved mentally by simply not carrying over the obstruction when mentally composing the digital note.
Step 2A, Prong Two: Does the claim recite additional elements that integrate the judicial exception into a practical application?
No. The claim does not recite any additional elements except the limitations identified as abstract idea (mental process) in Step 2A Prong One.
Therefore, this judicial exception is not integrated into a practical application because the claim contains no additional elements beyond the abstract idea limitations.
Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
No. The claim does not recite any additional elements other than the limitations identified as abstract idea (mental process) in Step 2A Prong One.
Therefore, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because it contains no additional elements beyond the abstract idea limitations.
Therefore, claim 7 is not eligible subject matter under 35 USC 101.
Regarding claim 8, it depends from claim 7 and further comprises filling in an area of the note corresponding with the obstruction to simulate a full note for the digital note.
Filling in an area of a note corresponding to the obstruction to simulate a full note for the digital note can be carried out mentally or with pencil and paper: mentally, by not considering the obstruction; or with pencil and paper, by using an eraser.
Therefore, claim 8 does not recite any additional elements that can integrate the abstract idea into practical application or amount to significantly more than the judicial exception (the answers to step 2A prong two and step 2B are no.). Claim 8 is not eligible subject matter under 35 USC 101.
Regarding claim 9, it depends from claim 7 and further comprises tracking the detected physical note.
Tracking a physical note can be carried out mentally by simply paying attention to the physical note so that it is tracked.
Therefore, claim 9 does not recite any additional elements that can integrate the abstract idea into practical application or amount to significantly more than the judicial exception (the answers to step 2A prong two and step 2B are no.). Claim 9 is not eligible subject matter under 35 USC 101.
Regarding claim 10, it depends from claim 7 and further comprises determining a moment when the physical note was detected.
Determining the moment when the physical note was detected can be carried out mentally; simply put, one can be mentally aware of the moment the physical note is detected.
Therefore, claim 10 does not recite any additional elements that can integrate the abstract idea into practical application or amount to significantly more than the judicial exception (the answers to step 2A prong two and step 2B are no.). Claim 10 is not eligible subject matter under 35 USC 101.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-6 and 11-14 are rejected under 35 U.S.C. 103 as being unpatentable over Elhadad et al. (US 20230237752 A1) in view of Carrasco et al. (US 12125154 B1).
Regarding claim 1, Elhadad teaches a method (See abstract, “Systems, methods, and non-transitory computer readable media including instructions for extracting content from a virtual display are disclosed.”) for displaying physical and digital notes (¶13, “Some disclosed embodiments may include systems, methods, and non-transitory computer readable media for selectively controlling display of digital objects. These embodiments may involve generating a plurality of digital objects for display in connection with use of a computing device operable in a first display mode and in a second display mode, wherein in the first display mode, the plurality of digital objects are displayed via a physical display connected to the computing device, and in the second display mode, some of the plurality of digital objects are displayed via the physical display, and at least one other of the plurality of digital objects is displayed via a wearable extended reality appliance; determining a usage status of the wearable extended reality appliance; selecting a display mode based on the usage status of the wearable extended reality appliance; and in response to the display mode selection, outputting for presentation the plurality of digital objects in a manner consistent with the selected display mode.” ¶522, “Additionally or alternatively, the information to be transmitted may be determined in such a manner that the information may allow one or more aspects of the scenes captured by the image sensor to be presented to the at least one second virtual writer. The one or more aspects may include, for example, the tangible markings created by the first physical writer, the physical surface on which the tangible markings may be created, and/or any other feature of the captured scenes. 
For example, the processing of the captured image data may include analyzing the captured image data to extract features such as the tangible markings created by the first physical writer and/or the physical surface on which the tangible markings may be created. For example, at least one processor may analyze the image data to track the movement of the physical writing implement relative to the physical surface and may, based on the tracked movement, determine the created tangible markings. In some examples, at least one processor may determine the look of the physical surface based on captured image data where the hand of the first physical writer and/or the physical writing implement are not covering portions of the physical surface.” Also see ¶523), comprising operations executed by a processor (¶522, executed by a processor): projecting digital notes onto a wall or surface for users (¶77, ¶81, projection, virtual content such as digital notes.);
capturing physical notes on the wall or surface, via an image capture device (¶522, capturing markings such as physical notes that can be on any surface), and converting the physical notes to corresponding digital notes (See Fig. 38 and ¶552);
displaying on a mixed reality device the digital notes combined with the physical notes when the wall or surface is viewed through the mixed reality device (¶515 or ¶516); but Elhadad does not explicitly disclose
displaying the digital notes on a common electronic board for the users.
Carrasco teaches displaying the digital notes on a common electronic board for the users (See col. 3 line 28 -38 “ The embodiments provide a way for users to use XR technology to improve information sharing and collaboration between remote locations by using electronic notes in an XR environment. For example, in a typical meeting scenario, when users are in a same physical location, users may collaborate by using a whiteboard, chalkboard, or bulletin board. The users may use markers or chalk to write and erase messages on the board or may affix notes to the board. By using the board in this manner, the users are able to concurrently view information on the board and may interact to write on the board and/or post messages.” See col. 12 line 37-49, “The systems provide for methods that facilitate interactive collaboration using an extended reality environment. In all of the above systems and methods, the XR headsets allow the users of the system to interact in an XR environment. The XR environment includes an electronic whiteboard. The XR environment also includes electronic notepads accessible by the users. The electronic notepads allow the users to create electronic notes, annotate the electronic notes, and move the electronic notes between the electronic notepads and the electronic whiteboard. Some embodiments further use a specialized tablet or a tablet that runs a mobile application or app as an effective approach to facilitate entry and annotation of the electronic notes.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Elhadad in view of Carrasco in order to provide a centralized and persistent visual workspace, such as a shared whiteboard, that bridges the gap between remote and co-located users.
Regarding claim 2, Elhadad in view of Carrasco teaches the method of claim 1, wherein the displaying on a mixed reality device operation comprises displaying on an augmented reality device the digital notes combined with the physical notes (See ¶515 digital and physical notes in an extended reality environment. ¶77, “ Other extended reality appliances may include a holographic projector or any other device or system capable of providing an augmented reality (AR), virtual reality (VR), mixed reality (MR), or any immersive experience. Typical components of wearable extended reality appliances may include at least one of: a stereoscopic head-mounted display, a stereoscopic head-mounted sound system, head-motion tracking sensors (such as gyroscopes, accelerometers, magnetometers, image sensors, structured light sensors, etc.), head mounted projectors, eye-tracking sensors, and/or additional components described below. Consistent with another aspect of the disclosure, the extended reality appliance may be a non-wearable extended reality appliance. Specifically, the non-wearable extended reality appliance may include multi-projected environment appliances. In some embodiments, an extended reality appliance may be configured to change the viewing perspective of the extended reality environment in response to movements of the user and in response to head movements of the user in particular. In one example, a wearable extended reality appliance may change the field-of-view of the extended reality environment in response to detecting head movements and determining a change of the head pose of the user. The change the field-of-view of the extended reality environment may be achieved by changing the spatial orientation without changing the spatial position of the user in the extended reality environment. 
In another example, a non-wearable extended reality appliance may change the spatial position of the user in the extended reality environment in response to a change in the position of the user in the real world, for example, by changing the spatial position of the user in the extended reality environment without changing the direction of the field-of-view with respect to the spatial position.”).
Regarding claim 3, Elhadad in view of Carrasco teaches the method of claim 1, wherein the displaying on a mixed reality device operation comprises displaying on a virtual reality device the digital notes combined with the physical notes (See ¶515 extended reality environment. ¶77, “ Other extended reality appliances may include a holographic projector or any other device or system capable of providing an augmented reality (AR), virtual reality (VR), mixed reality (MR), or any immersive experience. Typical components of wearable extended reality appliances may include at least one of: a stereoscopic head-mounted display, a stereoscopic head-mounted sound system, head-motion tracking sensors (such as gyroscopes, accelerometers, magnetometers, image sensors, structured light sensors, etc.), head mounted projectors, eye-tracking sensors, and/or additional components described below. Consistent with another aspect of the disclosure, the extended reality appliance may be a non-wearable extended reality appliance. Specifically, the non-wearable extended reality appliance may include multi-projected environment appliances. In some embodiments, an extended reality appliance may be configured to change the viewing perspective of the extended reality environment in response to movements of the user and in response to head movements of the user in particular. In one example, a wearable extended reality appliance may change the field-of-view of the extended reality environment in response to detecting head movements and determining a change of the head pose of the user. The change the field-of-view of the extended reality environment may be achieved by changing the spatial orientation without changing the spatial position of the user in the extended reality environment. 
In another example, a non-wearable extended reality appliance may change the spatial position of the user in the extended reality environment in response to a change in the position of the user in the real world, for example, by changing the spatial position of the user in the extended reality environment without changing the direction of the field-of-view with respect to the spatial position.”).
Regarding claim 4, Elhadad in view of Carrasco teaches the method of claim 1, further comprising detecting, via the mixed reality device, gestures from a user of the mixed reality device (¶528, mixed reality with gesture input).
Regarding claim 5, Elhadad in view of Carrasco teaches the method of claim 4, further comprising interpreting the detected gestures as commands (¶528, mixed reality with gesture input. ¶548).
Regarding claim 6, Elhadad in view of Carrasco teaches the method of claim 1, further comprising updating the common electronic board as users interact with the displayed digital notes (See Fig. 4 and 5, col. 10 line 44 – col. 11 line 25).
Regarding claim 11, Elhadad teaches a method (See abstract, “Systems, methods, and non-transitory computer readable media including instructions for extracting content from a virtual display are disclosed.”) for displaying digital notes (¶13, “Some disclosed embodiments may include systems, methods, and non-transitory computer readable media for selectively controlling display of digital objects. These embodiments may involve generating a plurality of digital objects for display in connection with use of a computing device operable in a first display mode and in a second display mode, wherein in the first display mode, the plurality of digital objects are displayed via a physical display connected to the computing device, and in the second display mode, some of the plurality of digital objects are displayed via the physical display, and at least one other of the plurality of digital objects is displayed via a wearable extended reality appliance; determining a usage status of the wearable extended reality appliance; selecting a display mode based on the usage status of the wearable extended reality appliance; and in response to the display mode selection, outputting for presentation the plurality of digital objects in a manner consistent with the selected display mode.” ¶522, “Additionally or alternatively, the information to be transmitted may be determined in such a manner that the information may allow one or more aspects of the scenes captured by the image sensor to be presented to the at least one second virtual writer. The one or more aspects may include, for example, the tangible markings created by the first physical writer, the physical surface on which the tangible markings may be created, and/or any other feature of the captured scenes. 
For example, the processing of the captured image data may include analyzing the captured image data to extract features such as the tangible markings created by the first physical writer and/or the physical surface on which the tangible markings may be created. For example, at least one processor may analyze the image data to track the movement of the physical writing implement relative to the physical surface and may, based on the tracked movement, determine the created tangible markings. In some examples, at least one processor may determine the look of the physical surface based on captured image data where the hand of the first physical writer and/or the physical writing implement are not covering portions of the physical surface.” Also see ¶523), comprising operations executed by a processor (¶522, executed by a processor):
projecting digital notes onto a wall or surface for users (¶77, ¶81, projection, virtual content such as digital notes.);
detecting a change in a scene, via an image capture device, on the wall or surface (¶522, capturing markings such as physical notes that can be on any surface);
updating the scene based upon the detected change (See Fig. 38 and ¶552);
projecting a new scene on the wall or surface based upon the updating (¶515 or ¶516); but Elhadad does not explicitly disclose
displaying the digital notes on a common electronic board for the users.
Carrasco teaches displaying the digital notes on a common electronic board for the users (See col. 3 line 28 -38 “ The embodiments provide a way for users to use XR technology to improve information sharing and collaboration between remote locations by using electronic notes in an XR environment. For example, in a typical meeting scenario, when users are in a same physical location, users may collaborate by using a whiteboard, chalkboard, or bulletin board. The users may use markers or chalk to write and erase messages on the board or may affix notes to the board. By using the board in this manner, the users are able to concurrently view information on the board and may interact to write on the board and/or post messages.” See col. 12 line 37-49, “The systems provide for methods that facilitate interactive collaboration using an extended reality environment. In all of the above systems and methods, the XR headsets allow the users of the system to interact in an XR environment. The XR environment includes an electronic whiteboard. The XR environment also includes electronic notepads accessible by the users. The electronic notepads allow the users to create electronic notes, annotate the electronic notes, and move the electronic notes between the electronic notepads and the electronic whiteboard. Some embodiments further use a specialized tablet or a tablet that runs a mobile application or app as an effective approach to facilitate entry and annotation of the electronic notes.”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Elhadad in view of Carrasco in order to provide a centralized and persistent visual workspace, such as a shared whiteboard, that bridges the gap between remote and co-located users.
Regarding claim 12, Elhadad in view of Carrasco teaches the method of claim 11, wherein the detecting operation comprises detecting a change in content of the digital notes in the scene (See Fig. 2, “Figure 2. The architecture of the BEDSR-Net. It consists of two sub-networks: BE-Net for estimating the global background color of the document and SR-Net for removing shadows. Given the input shadow image, BE-Net predicts the background color. As a side product, it generates an attention map, which depicts how likely each pixel belongs to the shadow-free background. With the help of the attention map, our model removes the typical requirement of ground-truth shadow masks for training. Along with the input shadow image, the estimated background color and the attention map are fed into the SR-Net for determining the shadow-free version of the input shadow image.” )
Regarding claim 13, Elhadad in view of Carrasco teaches the method of claim 11, wherein the detecting operation comprises detecting a change in position of the digital notes in the scene (See ¶515 digital and physical notes in an extended reality environment. ¶77, “ Other extended reality appliances may include a holographic projector or any other device or system capable of providing an augmented reality (AR), virtual reality (VR), mixed reality (MR), or any immersive experience. Typical components of wearable extended reality appliances may include at least one of: a stereoscopic head-mounted display, a stereoscopic head-mounted sound system, head-motion tracking sensors (such as gyroscopes, accelerometers, magnetometers, image sensors, structured light sensors, etc.), head mounted projectors, eye-tracking sensors, and/or additional components described below. Consistent with another aspect of the disclosure, the extended reality appliance may be a non-wearable extended reality appliance. Specifically, the non-wearable extended reality appliance may include multi-projected environment appliances. In some embodiments, an extended reality appliance may be configured to change the viewing perspective of the extended reality environment in response to movements of the user and in response to head movements of the user in particular. In one example, a wearable extended reality appliance may change the field-of-view of the extended reality environment in response to detecting head movements and determining a change of the head pose of the user. The change the field-of-view of the extended reality environment may be achieved by changing the spatial orientation without changing the spatial position of the user in the extended reality environment. 
In another example, a non-wearable extended reality appliance may change the spatial position of the user in the extended reality environment in response to a change in the position of the user in the real world, for example, by changing the spatial position of the user in the extended reality environment without changing the direction of the field-of-view with respect to the spatial position.”).
Regarding claim 14, Elhadad in view of Carrasco teaches the method of claim 11, wherein the detecting operation comprises detecting a change in quantity of the digital notes in the scene (See Fig. 1, “Figure 1. An example of document shadow removal. Previous methods, Kligler et al.’s method [16], Bako et al.’s method [1], and ST-CGAN [28], exhibit artifacts such as shadow edges (d), color washout (e) and residual shadows (f) in their results. Our result (c) has much fewer artifacts and is very close to the ground-truth shadow-free image (b).”, See Fig. 2, “Figure 2. The architecture of the BEDSR-Net. It consists of two sub-networks: BE-Net for estimating the global background color of the document and SR-Net for removing shadows. Given the input shadow image, BE-Net predicts the background color. As a side product, it generates an attention map, which depicts how likely each pixel belongs to the shadow-free background. With the help of the attention map, our model removes the typical requirement of ground-truth shadow masks for training. Along with the input shadow image, the estimated background color and the attention map are fed into the SR-Net for determining the shadow-free version of the input shadow image.” The replacing of the shadows increases the quantity of digital notes as more digital notes are visible).
Claims 7-10 are rejected under 35 U.S.C. 103 as being unpatentable over Elhadad et al. (US 20230237752 A1) in view of Lin et al. (BEDSR-Net: A Deep Shadow Removal Network from a Single Document Image).
Regarding claim 7, Elhadad teaches a method (See abstract, “Systems, methods, and non-transitory computer readable media including instructions for extracting content from a virtual display are disclosed.”) for generating a digital note (¶13, “Some disclosed embodiments may include systems, methods, and non-transitory computer readable media for selectively controlling display of digital objects. These embodiments may involve generating a plurality of digital objects for display in connection with use of a computing device operable in a first display mode and in a second display mode, wherein in the first display mode, the plurality of digital objects are displayed via a physical display connected to the computing device, and in the second display mode, some of the plurality of digital objects are displayed via the physical display, and at least one other of the plurality of digital objects is displayed via a wearable extended reality appliance; determining a usage status of the wearable extended reality appliance; selecting a display mode based on the usage status of the wearable extended reality appliance; and in response to the display mode selection, outputting for presentation the plurality of digital objects in a manner consistent with the selected display mode.” ¶522, “Additionally or alternatively, the information to be transmitted may be determined in such a manner that the information may allow one or more aspects of the scenes captured by the image sensor to be presented to the at least one second virtual writer. The one or more aspects may include, for example, the tangible markings created by the first physical writer, the physical surface on which the tangible markings may be created, and/or any other feature of the captured scenes. 
For example, the processing of the captured image data may include analyzing the captured image data to extract features such as the tangible markings created by the first physical writer and/or the physical surface on which the tangible markings may be created. For example, at least one processor may analyze the image data to track the movement of the physical writing implement relative to the physical surface and may, based on the tracked movement, determine the created tangible markings. In some examples, at least one processor may determine the look of the physical surface based on captured image data where the hand of the first physical writer and/or the physical writing implement are not covering portions of the physical surface.” Also see ¶523), comprising operations executed by a processor (¶522, executed by a processor):
automatically detecting a physical note via an image capture device (¶522, “Additionally or alternatively, the information to be transmitted may be determined in such a manner that the information may allow one or more aspects of the scenes captured by the image sensor to be presented to the at least one second virtual writer. The one or more aspects may include, for example, the tangible markings created by the first physical writer, the physical surface on which the tangible markings may be created, and/or any other feature of the captured scenes. For example, the processing of the captured image data may include analyzing the captured image data to extract features such as the tangible markings created by the first physical writer and/or the physical surface on which the tangible markings may be created. For example, at least one processor may analyze the image data to track the movement of the physical writing implement relative to the physical surface and may, based on the tracked movement, determine the created tangible markings. In some examples, at least one processor may determine the look of the physical surface based on captured image data where the hand of the first physical writer and/or the physical writing implement are not covering portions of the physical surface.”, the capturing of markings, such as physical notes, on any surface);
converting the detected physical note to a corresponding digital note (See Fig. 38 and ¶552). However, Elhadad does not explicitly disclose:
if the physical note has an obstruction, removing the obstruction in the digital note.
Lin teaches if the physical note has an obstruction, removing the obstruction in the digital note (See Fig. 1, “Figure 1. An example of document shadow removal. Previous methods, Kligler et al.’s method [16], Bako et al.’s method [1], and ST-CGAN [28], exhibit artifacts such as shadow edges (d), color washout (e) and residual shadows (f) in their results. Our result (c) has much fewer artifacts and is very close to the ground-truth shadow-free image (b).”, See Fig. 2, “Figure 2. The architecture of the BEDSR-Net. It consists of two sub-networks: BE-Net for estimating the global background color of the document and SR-Net for removing shadows. Given the input shadow image, BE-Net predicts the background color. As a side product, it generates an attention map, which depicts how likely each pixel belongs to the shadow-free background. With the help of the attention map, our model removes the typical requirement of ground-truth shadow masks for training. Along with the input shadow image, the estimated background color and the attention map are fed into the SR-Net for determining the shadow-free version of the input shadow image.” Figures 1 and 2 show removal of the obstructing shadows in the digital representation of the note.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Elhadad with Lin, as removing the obstruction clarifies what is being read, thereby increasing visibility and making the notes easier to read.
Regarding claim 8, Elhadad in view of Lin teaches the method of claim 7, further comprising filling in an area of the note corresponding with the obstruction to simulate a full note for the digital note (See Fig. 1, “Figure 1. An example of document shadow removal. Previous methods, Kligler et al.’s method [16], Bako et al.’s method [1], and ST-CGAN [28], exhibit artifacts such as shadow edges (d), color washout (e) and residual shadows (f) in their results. Our result (c) has much fewer artifacts and is very close to the ground-truth shadow-free image (b).”, See Fig. 2, “Figure 2. The architecture of the BEDSR-Net. It consists of two sub-networks: BE-Net for estimating the global background color of the document and SR-Net for removing shadows. Given the input shadow image, BE-Net predicts the background color. As a side product, it generates an attention map, which depicts how likely each pixel belongs to the shadow-free background. With the help of the attention map, our model removes the typical requirement of ground-truth shadow masks for training. Along with the input shadow image, the estimated background color and the attention map are fed into the SR-Net for determining the shadow-free version of the input shadow image.” Replacing the shadows fills in the area of the obstruction and simulates a full note in the digital representation.).
Regarding claim 9, Elhadad in view of Lin teaches the method of claim 7, further comprising tracking the detected physical note (¶522, “Additionally or alternatively, the information to be transmitted may be determined in such a manner that the information may allow one or more aspects of the scenes captured by the image sensor to be presented to the at least one second virtual writer. The one or more aspects may include, for example, the tangible markings created by the first physical writer, the physical surface on which the tangible markings may be created, and/or any other feature of the captured scenes. For example, the processing of the captured image data may include analyzing the captured image data to extract features such as the tangible markings created by the first physical writer and/or the physical surface on which the tangible markings may be created. For example, at least one processor may analyze the image data to track the movement of the physical writing implement relative to the physical surface and may, based on the tracked movement, determine the created tangible markings. In some examples, at least one processor may determine the look of the physical surface based on captured image data where the hand of the first physical writer and/or the physical writing implement are not covering portions of the physical surface.”).
Regarding claim 10, Elhadad in view of Lin teaches the method of claim 7, further comprising determining a moment when the physical note was detected (¶522, “Additionally or alternatively, the information to be transmitted may be determined in such a manner that the information may allow one or more aspects of the scenes captured by the image sensor to be presented to the at least one second virtual writer. The one or more aspects may include, for example, the tangible markings created by the first physical writer, the physical surface on which the tangible markings may be created, and/or any other feature of the captured scenes. For example, the processing of the captured image data may include analyzing the captured image data to extract features such as the tangible markings created by the first physical writer and/or the physical surface on which the tangible markings may be created. For example, at least one processor may analyze the image data to track the movement of the physical writing implement relative to the physical surface and may, based on the tracked movement, determine the created tangible markings. In some examples, at least one processor may determine the look of the physical surface based on captured image data where the hand of the first physical writer and/or the physical writing implement are not covering portions of the physical surface.”).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROBERT J CRADDOCK whose telephone number is (571)270-7502. The examiner can normally be reached Monday - Friday, 10:00 AM - 6:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Devona E Faulk can be reached at 571-272-7515. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ROBERT J CRADDOCK/Primary Examiner, Art Unit 2618